Moore’s Law, Exponential Growth, and the Rise of Modern AI

Moore’s Law explained with data analysis in PyTorch, exponential growth modeling, and its impact on modern AI.

 

Exponential Growth of Moore's Law

Introduction

In this blog we will learn about Moore's Law and how it led to the current AI revolution. It has played a vital role in the growth of modern technology and its computing power.

What is Moore's Law?

In 1965, Intel co-founder Gordon Moore observed that the number of transistors on a dense integrated circuit doubles approximately every two years. This continuous exponential growth leads to higher processing power and efficiency.

Growth of transistors on a dense integrated circuit over the years.


With this growth, computers became more powerful and cheaper at the same time.

Why Does This Matter?

Modern chips have reached nanometer scales, allowing billions of transistors to fit on a tiny surface. The shift toward 2 nm (nanometer) chips has had a practical impact on our daily lives.

1. Faster Computers & Professional Work

In the past, speed meant how fast a program could open. Now it means your computer can do things that were previously impossible without a supercomputer.

2. Smaller, Sleeker Devices

As transistors shrink, we can fit more brainpower into smaller spaces, for example smartphones and smartwatches.

| Feature | The "Old" Way | The 2026 Way |
| --- | --- | --- |
| Smartphones | Fast apps, 1-day battery. | AI agents, 2-day battery, on-device privacy. |
| Wearables | Bulky, basic fitness tracking. | Invisible "smart rings" & medical-grade sensors. |
| AI Tools | Slow, cloud-dependent, expensive. | Instant, local, and often built-in for free. |
| Data Centers | Massive energy "hogs." | Liquid-cooled "AI factories" powered by custom ASICs. |

Let's look at the notebook

You can get the Moore's Law dataset easily on Kaggle.

Loading Moore's Law Dataset
 

Messy Dataset
Cleaning the Data

Cleaning the dataset
First we will create the column names by taking the first row with iloc and splitting it on the separator (";").

Next we will create an empty list for the rows, loop from the second row to the end, split each row on ";", flatten the values, and append them to the list.
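The two steps above can be sketched like this; the tiny `raw` frame below is a made-up stand-in for the messy Kaggle file, assuming it was read as a single column of semicolon-separated strings:

```python
import pandas as pd

# Stand-in for the messy file: each row is one string of ";"-joined fields
# (the column names here are assumptions, not the dataset's real headers)
raw = pd.DataFrame({0: [
    "year;transistor_count",
    "1971;2300",
    "1972;3500",
]})

# Take the header out of the first row and split it on ";"
columns = raw.iloc[0, 0].split(";")

# Loop from the second row to the end, splitting each row on ";"
rows = []
for value in raw.iloc[1:, 0]:
    rows.append(value.split(";"))

# Rebuild a clean DataFrame with proper columns and numeric types
clean = pd.DataFrame(rows, columns=columns).astype(int)
```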


The data will look something like this: 

Clean data

Pre-processing the Data / Normalization of data

For pre-processing you can use StandardScaler, imported from the sklearn library.

Normalizing the data is nothing but subtracting the mean of the data and dividing by its standard deviation.

Data Normalization
Here mx and my are the means of x and y respectively, while sx and sy are the standard deviations.

We scale the data into a common range so the model can train stably: because the data grows exponentially, some values are much smaller while others are much larger.

Now that the data is standardized it has zero mean and unit variance, and values will mostly fall within a small range around zero, like a bell curve.
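A minimal sketch of the normalization step, using a few made-up sample values for x (years) and y (log transistor counts):

```python
import numpy as np

# Illustrative values: years and the log of transistor counts
x = np.array([1971.0, 1974.0, 1978.0, 1982.0])
y = np.log(np.array([2300.0, 6000.0, 29000.0, 134000.0]))

# mx, my are the means; sx, sy are the standard deviations
mx, sx = x.mean(), x.std()
my, sy = y.mean(), y.std()

# Standardize: subtract the mean, divide by the standard deviation
x_norm = (x - mx) / sx
y_norm = (y - my) / sy
```

After this step both arrays have zero mean and unit variance, which is exactly what sklearn's StandardScaler computes.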




Creating a Model

Simple Linear Regression Model

First we will convert the data to float32, since PyTorch defaults to float32 rather than float64.
We will use a simple linear model with one input and one output, built with PyTorch's nn module.

Our loss will be Mean Squared Error (MSE), a standard criterion for regression.
We will use the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01 and momentum of 0.7.

The learning rate is a step size that determines how fast the model moves toward the minimum, and momentum accumulates past gradients to speed up the process. For example, if you are walking down a mountain, the learning rate is the size of each step you take, while momentum lets you roll past small bumps by building up speed.
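Putting the pieces above together, the model, loss, and optimizer might be set up like this (the hyperparameters are the ones from the text; the sample tensor is an assumption for illustration):

```python
import torch
import torch.nn as nn

# Simple linear regression: one input feature, one output
model = nn.Linear(1, 1)

# Mean Squared Error, a standard criterion for regression
criterion = nn.MSELoss()

# Stochastic Gradient Descent with lr=0.01 and momentum=0.7
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.7)

# PyTorch defaults to float32, so inputs are cast accordingly
x = torch.tensor([[0.0], [1.0]], dtype=torch.float32)
```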

Training a Model

Training a Linear Regression Model
An epoch is one full pass of the training loop over the data.

We will store each epoch's loss in an empty list, so we can plot a graph of how the model behaves at every step it takes.

The optimizer accumulates gradients, so we call zero_grad at the start of each epoch to make sure its gradients are reset to zero.

After that we pass the inputs through the model, compare the result with the targets, and call the difference the loss.

The loss measures how far we are from the actual targets. We store it in our list, then tell the model to back-propagate the loss (move backward using the chain rule of calculus) and take an optimizer step. We repeat this process until the loop ends.
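The loop described above can be sketched as follows; the toy data here is a made-up stand-in for the normalized dataset:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy normalized data standing in for the cleaned dataset (assumption)
X = torch.linspace(-1.5, 1.5, 20).reshape(-1, 1)
Y = 0.9 * X + 0.05 * torch.randn_like(X)

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.7)

losses = []  # store each epoch's loss for plotting later
for epoch in range(50):
    optimizer.zero_grad()      # reset accumulated gradients to zero
    pred = model(X)            # forward pass through the linear model
    loss = criterion(pred, Y)  # how far we are from the targets
    losses.append(loss.item())
    loss.backward()            # back-propagate via the chain rule
    optimizer.step()           # take one gradient-descent step
```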

Graph of loss


Plotting the losses stored in the list, we found that initially our model was far off, with a loss above 1.4. With each epoch it corrected itself; after about 25 epochs the loss was mostly minimized, and by the 50th epoch the curve was flat, meaning gradient descent had reached a minimum.



Making final Prediction

Line of Best Fit

Checking the Predicted Result





As you can see, the line fits perfectly. But wait: the data is supposed to be exponential, yet it shows linear growth. Why?

Actually, before plotting we took the logarithm of y: exponential growth appears as a curve, while our model is a straight line, so the log transform makes the data and the line agree. We then checked the rate of change, which came out around "rate_of_change : 1.3860122239080883", and the transistor doubling time, "time_to_double_the_transistors : 2.1234128330502604". A doubling time close to two years is consistent with Moore's Law.
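The doubling-time calculation can be sketched like this, using a handful of illustrative data points rather than the full Kaggle dataset; the idea is that the slope of log2(transistor count) versus year is doublings per year, so its reciprocal is the doubling time:

```python
import numpy as np

# Illustrative year/count pairs, not the actual dataset
years = np.array([1971, 1974, 1978, 1982, 1985, 1989], dtype=float)
counts = np.array([2300, 6000, 29000, 134000, 275000, 1180000], dtype=float)

# Fit a straight line to log2(counts) vs years;
# the slope is how many doublings happen per year
slope, intercept = np.polyfit(years, np.log2(counts), 1)

# Reciprocal of the slope = years needed for one doubling
time_to_double = 1.0 / slope
```

On real transistor data this comes out close to two years, which is what the notebook's result shows.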

Conclusion

Moore’s Law is not just a historical observation — it explains why computing power has grown so rapidly over time.

By analyzing the dataset and applying linear regression, we saw that transistor growth follows an exponential pattern. When we transformed the data using a logarithm, the exponential curve became linear, which allowed us to model it properly.

The calculated rate of change and doubling time confirm that transistor counts roughly double every two years — supporting Moore’s original observation.

This steady increase in computing power is one of the biggest reasons why modern AI and deep learning became possible. Without exponential hardware growth, training large neural networks would not be practical.

Understanding Moore’s Law reminds us that AI progress is not only about better algorithms — it is also about better hardware.

As computing continues to evolve, AI will continue to grow alongside it.




