A quad-core processor can fetch and execute four sets of instructions in the time it takes a dual-core processor to handle just two. But the amount of information a quad-core CPU can handle simultaneously depends on a lot more than just the design of the processing unit itself. Generally, each core runs a single thread. But as we’ll see, thanks to advances in CPU technology like multitasking and hyperthreading, this isn’t a hard and fast rule. In what follows, we’re finally going to answer that nagging question: “Just how many threads can a quad-core process at once?”
To find out how many threads quad-cores can process at once, let’s take a few steps back and look at how the multi-core CPU (central processing unit) was developed and how it relates to threads.
Until 2001, processors had only a single core. Any improvements in processing capacity or tweaks for extra speed happened within this single unit. The Pentium series of processors provided the training ground for incredible advances in CPU technology throughout the 1990s. But the single-core architecture had a huge problem.
Higher processing speeds, along with users’ growing appetite for multitasking with several apps and multiple tabs open at a time, began to overwhelm the single-core processor. This led to freezes, lag, and even overheating of the chip. Then came the idea of splitting the workload between two processor cores within one CPU.
Each core is essentially an autonomous processor, with its own resources. So, a dual-core processor can handle a couple of separate ‘threads’ of instructions.
A thread is a sequence of instructions from a piece of software, or even from hardware, whether inside the laptop or a peripheral plugged into it, like a mouse or keyboard. Through a technique known as SMT (simultaneous multithreading), a single core can present up to two hardware threads if the CPU supports it.
Similarly, a quad-core CPU can handle at least four individual threads of instructions. This improvement in processor architecture minimized glitches, lag, and the dreaded overheating because the processor had the resources to work on more information at once.
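The operating system sees each hardware thread as its own logical processor, and you can check how many your machine exposes with Python’s standard library. A minimal sketch (note that `os.cpu_count()` reports logical processors, not physical cores):

```python
import os

# os.cpu_count() reports logical processors: physical cores multiplied
# by the hardware threads each core exposes. On a quad-core CPU with
# SMT (two threads per core) this is 8; without SMT it is 4.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

Run this on a Ryzen 3 5400U and you should see 8; on an Athlon Gold 3150G, 4.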
The earliest generations of quad-core processors could only handle one thread per core. That arrangement is becoming rather obsolete, especially with the growing demands of the gaming world.
It would be extremely hard to find a modern Intel processor that works with this antiquated one-to-one arrangement. As for AMD, their mainstream processors are already well above that level. If you look, though, you can still find the AMD Athlon™ Gold 3150G, which uses the 4-threads-per-4-cores model.
Newer generations of quad-core processors employ an enhanced system that assigns two threads per core: the SMT technology, which Intel brands as Hyper-Threading. Some examples are the Intel Core i7-11390H and the Intel Core i3-11100HE. AMD’s Ryzen chips implement the same idea under the plain name simultaneous multithreading; an example of one of their APUs utilizing this technology is the AMD Ryzen 3 5400U. All these processors have 4 cores and 8 threads.
The Dual Threads Per Core Model
Thus, quad-core processors can work with up to 8 threads simultaneously. Their ability to do so depends on:
- The specifications of the processor.
- The type of data being processed.
- The number of apps you’re running at any given time.
This two-to-one relationship between threads and cores isn’t consistent across every type of processor. It holds for most consumer use cases, but when it comes to specialized devices or servers, CPUs can have even more threads per core.
We’ve seen 4-way multithreading in the Intel Xeon Phi server-grade processors, where 64 cores carry 256 threads, and 8-way multithreading in the Oracle SPARC T5 microprocessors, where 16 cores carry 128 threads.
The way this works, again, is the mechanism known as multithreading technology. Intel calls it Hyper-Threading, while AMD simply calls it simultaneous multithreading, or SMT. (AMD’s HyperTransport, despite the similar-sounding name, is an interconnect, not a threading technology.) Since the term ‘hyperthreading’ caught on more than the others, we’ll go by that!
Hyperthreading is a technique that presents logical processing units (sometimes also known as virtual cores) to the operating system. These logical processors differ from physical cores in that they aren’t separate pieces of silicon; they’re extra streams of work a single core can track at once – hence the “virtual” label.
These logical processors work by arranging the sets of data coming into the CPU into a sort of digital ‘conveyor belt.’ Once arranged, these data sets move into any available core for actual processing.
Without the multithreading conveyor belt, incoming data waits until the core completely finishes processing the current batch before the next batch can start. Clearly, multithreading minimizes downtime in the cycle, making the CPU more efficient and resulting in a significant increase in throughput.
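The same keep-the-core-busy idea shows up at the software level. Here’s a small illustrative sketch in Python (the `fetch` function and its timings are invented for the example): with a thread pool, tasks that spend most of their time waiting can overlap instead of queueing one after another.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(task_id):
    # Stand-in for work that mostly waits (disk, network):
    # while one thread sleeps, the core can run another.
    time.sleep(0.1)
    return task_id * 2

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, range(8)))
elapsed = time.perf_counter() - start

# The eight 0.1 s waits overlap, so the batch takes roughly 0.1 s
# rather than the 0.8 s a strictly serial loop would need.
print(results, f"{elapsed:.2f}s")
```

Serially, eight waits of 0.1 s would cost 0.8 s; overlapped, the whole batch finishes in roughly one wait’s worth of time.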
Hyperthreading and multitasking are both clever tricks that let the operating system do more with less. But that’s where the similarities end.
Hyperthreading is a ‘parallel’ process. It relies on dividing the incoming instruction sets and data into manageable chunks. After that, it feeds the data to any available core for processing. At any given time, there’s a queue of data being handled by a ‘logical processing unit.’
Multitasking, by contrast, is a ‘series’ process. The instructions coming from an app are sent into the core for processing until another app requests to go live. The interrupting signal is then given priority via a context switch.
The competing sets of data are then processed alternately: the core works on each one for a little slice of time, then moves on to the next set. So, whether human or machine, multitasking means working on one set of tasks for a while, stopping, then working on a new set.
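That series-style alternation can be sketched in a few lines of Python. This toy round-robin scheduler (entirely illustrative – `task` and `run_round_robin` are made-up names, not real OS machinery) gives each task one time slice, context-switches to the next, and repeats until every task is done:

```python
from collections import deque

def task(name, slices):
    # Each yield is one time slice of work for this app.
    for i in range(slices):
        yield f"{name}:{i}"

def run_round_robin(tasks):
    # Work on one task briefly, then switch to the next (a context
    # switch), alternating until every task has finished.
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # run one time slice
            queue.append(current)        # not finished: re-queue it
        except StopIteration:
            pass                         # task done: drop it
    return trace

print(run_round_robin([task("A", 2), task("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

The trace shows the core’s attention ping-ponging between A and B – series alternation, not true parallelism.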
Working with multiple threads improves the efficiency of the processor cores, but it falls well short of doubling a core’s processing power. Multithreading typically increases a core’s throughput by only about 25–30% – nowhere near the gain of adding a second full core.
The increase in processing efficiency also depends on how the software is built. Some older apps were designed to work with a single thread, so no matter how many threads are available, there’s no difference in processing speed.
But newer apps (especially games) are built with hyperthreading in mind. They’re able to exploit every available thread to reduce processing time. Some even employ shortcuts to streamline communication between the graphics card and the main CPU while capitalizing on any available threads. This is optimization of resources at its best!
Gaming apps top the list when it comes to processor-hungry software. Even quad-core processors with double-threading capability still seem to need more!
Games constantly demand various forms of data handling, which is why dividing those tasks over multiple threads and feeding them to the cores speeds up the game. Real-time action titles with elaborate settings and swift response requirements, like Overwatch, all run much better on a quad-core CPU that can utilize up to 8 threads simultaneously.
Rendering applications are, to some extent, similar to games. They too involve loading large amounts of data that can be subdivided over the available threads, so the cores are always fed chunks of information at maximum throughput with virtually no downtime.
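The tile-splitting idea behind threaded rendering looks roughly like this sketch (`shade_tile` is a hypothetical stand-in for per-tile pixel work; also note that in CPython, truly CPU-bound tiles would need a process pool rather than threads because of the GIL, but the chunking pattern is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def shade_tile(tile):
    # Hypothetical stand-in for the per-tile pixel work a renderer does.
    return sum(x * x for x in tile)

def render(pixels, tile_size=4, workers=4):
    # Split the frame into independent tiles so every worker always
    # has a chunk of data ready – the "no downtime" idea from above.
    tiles = [pixels[i:i + tile_size] for i in range(0, len(pixels), tile_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(shade_tile, tiles))

print(render(list(range(16))))
# → 1240 (the sum of 0² through 15², same answer as a serial loop)
```

Because each tile is independent, the result is identical to serial processing; only the scheduling changes.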
From this brief description of how things work inside the new generation of processors, it’s easier to understand the underlying difference between a core and a thread: the core is a physical entity capable of processing data, and the thread is a set of instructions or data that needs processing. Usually, each thread is associated with an app. And CPUs now come with both higher core counts and multithreading capabilities as our desire to work more efficiently grows.