Moore’s Law, Exponential Growth, and the Rise of Modern AI
| Exponential Growth of Moore's Law |
Introduction
In this blog, we will learn about Moore's Law and how it led to the current AI revolution: it has played a vital role in the growth of modern technology and its computing power.
What is Moore's Law?
Moore's Law is the observation, made by Intel co-founder Gordon Moore in 1965, that the number of transistors on a dense integrated circuit doubles roughly every two years.
| Growth of Transistors in a dense Integrated circuit over the years. |
Why Does This Matter?
Modern chips have reached nanometer scales, allowing billions of transistors to fit on a tiny surface. The shift toward 2 nm (nanometer) chips has made a practical impact on our daily lives.
1. Faster Computers & Professional Work
In the past, speed meant how fast a program could open. Now it means your computer can do things that were previously impossible without a supercomputer.
2. Smaller, Sleeker Devices
| Feature | The "Old" Way | The 2026 Way |
| --- | --- | --- |
| Smartphones | Fast apps, 1-day battery. | AI agents, 2-day battery, on-device privacy. |
| Wearables | Bulky, basic fitness tracking. | Invisible "smart rings" & medical-grade sensors. |
| AI Tools | Slow, cloud-dependent, expensive. | Instant, local, and often built-in for free. |
| Data Centers | Massive energy "hogs." | Liquid-cooled "AI factories" powered by custom ASICs. |
Let's look at the notebook
You can get the Moore's Law dataset easily on Kaggle.
| Loading Moore's Law Dataset |
| Messy Dataset |
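The loading step can be sketched like this. In the notebook you would read the Kaggle CSV by filename; here a tiny inline sample, with assumed column names, stands in so the snippet is self-contained. Because no separator is passed, every semicolon-delimited row lands in a single column, which is exactly the "messy" shape shown above:

```python
import io
import pandas as pd

# Assumed column names and values; the real file comes from Kaggle,
# e.g. df = pd.read_csv("moores_law.csv")
sample = "Processor;Transistors;Year\nIntel 4004;2308;1971\nIntel 8008;3500;1972\n"

# Default separator is ',', so the ';'-delimited fields are not split.
df = pd.read_csv(io.StringIO(sample))
print(df.shape)  # (2, 1): every row lands in one messy column
```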
Cleaning the Data
| Cleaning the dataset |
Next, we create an empty list, loop from the second row to the end, split each row on the semicolon ( ; ) to flatten the values, and append them to the list.
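A minimal sketch of that cleaning loop, using hypothetical sample rows in place of the real file contents:

```python
# Hypothetical raw rows: each record is one semicolon-separated string.
raw = [
    "year;transistor_count",   # header row (skipped below)
    "1971;2308",
    "1972;3500",
    "1974;6000",
]

rows = []                          # will hold one flat list per record
for line in raw[1:]:               # loop from the second row to the end
    rows.append(line.split(";"))   # split on ';' and append

print(rows)  # [['1971', '2308'], ['1972', '3500'], ['1974', '6000']]
```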
The data will look something like this:
| Clean data |
Pre-processing the Data / Normalization of data
For pre-processing you can use StandardScaler, imported from the sklearn library.
Normalizing the data is nothing but subtracting the mean of the data and dividing by its standard deviation.
| Data Normalization |
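The normalization formula can be sketched directly with NumPy (toy values assumed for illustration):

```python
import numpy as np

# Hypothetical toy column to normalize (e.g. the year values).
x = np.array([1971.0, 1980.0, 1990.0, 2000.0])

# Normalization: subtract the mean, then divide by the standard deviation.
x_norm = (x - x.mean()) / x.std()

print(x_norm.mean(), x_norm.std())  # ≈ 0.0 and 1.0 after normalizing
```

sklearn's StandardScaler applies the same (x - mean) / std transform column-wise through its fit_transform method.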
Creating a Model
| Simple Linear Regression Model |
First we will convert the data to float32, since PyTorch defaults to float32 rather than float64.
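A minimal sketch of the dtype conversion and the model, with hypothetical data:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical input data. NumPy arrays default to float64, while
# PyTorch's default dtype is float32, so we convert before training.
X_np = np.array([[0.0], [1.0], [2.0]])           # float64 by default
X = torch.from_numpy(X_np.astype(np.float32))    # now float32

# One input feature (the year), one output (the log transistor count).
model = nn.Linear(in_features=1, out_features=1)  # y = w*x + b
print(X.dtype, model(X).shape)
```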
Training a Model
| Training a Linear Regression Model |
We will store each loop's loss in an empty list so we can plot a graph of how the model behaves at every step it takes.
PyTorch accumulates gradients across iterations, so we call zero_grad() to make sure the gradients in the optimizer are reset to zero before each step.
After that, we feed the input to the model, compare the model's output with the targets, and call the difference the loss.
The loss tells us how far we are from the actual target. We store it in our list, then tell the model to back-propagate the loss
(move backward using the chain rule of calculus) and take an optimizer step. We repeat this process until the loop ends.
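The training loop described above can be sketched as follows (the data and hyperparameters here are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical normalized inputs and targets.
X = torch.tensor([[0.0], [1.0], [2.0], [3.0]])
y = torch.tensor([[1.0], [2.4], [3.8], [5.2]])

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []                      # store each step's loss for plotting
for epoch in range(50):
    optimizer.zero_grad()        # reset accumulated gradients
    pred = model(X)              # forward pass
    loss = loss_fn(pred, y)      # how far we are from the targets
    losses.append(loss.item())
    loss.backward()              # back-propagate via the chain rule
    optimizer.step()             # take one gradient-descent step

print(losses[0], losses[-1])     # the loss shrinks over the epochs
```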
| Graph of loss |
Plotting the losses stored in the list, we find that our model was initially far off, with a loss above 1.4. With each loop it corrected itself; after about 25 iterations the loss was mostly minimized, and by the time the model reached its 50th loop the curve was flat, meaning gradient descent had reached a minimum.
Making the Final Prediction
| Line of Best Fit |
| Checking the Predicted Result |
As you can see, the line fits perfectly. But wait: the data is supposed to be exponential, yet the plot shows linear growth. Why?
Before plotting, we actually took the logarithm of y. Exponential growth appears as a curve, while our model produces a straight line, so the log transform makes the data and the line agree. We then computed the rate of change, which came out to about "rate_of_change : 1.3860122239080883" (the transistor count multiplies by roughly 1.39 each year), and the doubling time, "time_to_double_the_transistors : 2.1234128330502604" years, which supports Moore's Law.
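The arithmetic linking those two numbers can be checked directly. If the fitted line predicts ln(transistors) with slope s per year (an assumption about the model above), the yearly growth factor is e^s and the doubling time is ln(2)/s:

```python
import math

# The growth factor quoted above (count multiplies by this each year).
rate_of_change = 1.3860122239080883
slope = math.log(rate_of_change)       # per-year slope in ln units

time_to_double = math.log(2) / slope   # years for the count to double
print(time_to_double)  # ≈ 2.12 years, matching the value quoted above
```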
Conclusion
Moore’s Law is not just a historical observation — it explains why computing power has grown so rapidly over time.
By analyzing the dataset and applying linear regression, we saw that transistor growth follows an exponential pattern. When we transformed the data using a logarithm, the exponential curve became linear, which allowed us to model it properly.
The calculated rate of change and doubling time confirm that transistor counts roughly double every two years — supporting Moore’s original observation.
This steady increase in computing power is one of the biggest reasons why modern AI and deep learning became possible. Without exponential hardware growth, training large neural networks would not be practical.
Understanding Moore’s Law reminds us that AI progress is not only about better algorithms — it is also about better hardware.
As computing continues to evolve, AI will continue to grow alongside it.


