I am a complete beginner when it comes to working with large numbers in code, and I know this is definitely the wrong approach, but it's what I started with.

```python
import tqdm

try:
    total = 0
    for num in tqdm.tqdm(range(2**100), total=2**100):
        total += len(bin(num)) - 2  # bin() prefixes '0b', so subtract 2
finally:
    # Save whatever was counted, even if the loop is interrupted
    with open('results.txt', 'w') as file:
        file.write(f'{total=}')
```

The result I got was:

```
0%| | 87580807/1267650600228229401496703205376 [00:39<159887459362604133471:34:24, 2202331.37it/s]
```

Obviously this approach is going to take **way too long**. I know I could try making it multi-core, but I don't think that would make much of a difference in speed.

What are my options here?

Will using another language like C significantly increase the speed so that it will take days or hours instead of eons? Is there another approach/algorithm I can use?

## Answer

OK, I figured it out, using @jasonharper's approach.

So the code is the following:

```python
total = 0
for power in range(1, 101):
    # Numbers with exactly `power` bits run from '1' + '0' * (power - 1)
    # to '1' * power in binary, i.e. from 2**(power - 1) to 2**power - 1.
    # Count how many there are and multiply by the bits each one uses.
    total += ((int('1' * power, base=2) - int('1' + '0' * (power - 1), base=2)) + 1) * power
```

`total` came out to 125497409422594710748173617332225, which is the number of **bits** (the code counts binary digits, not bytes) needed to store every number between 1 and 2^100 − 1.
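To double-check the closed form, here is a quick sanity test against brute-force counting for a small exponent (it uses `int.bit_length()`, which for positive numbers is equivalent to `len(bin(num)) - 2`):

```python
def brute_force_bits(n_power):
    # Sum the bit lengths of every number from 1 to 2**n_power - 1.
    return sum(num.bit_length() for num in range(1, 2**n_power))

def closed_form_bits(n_power):
    # There are 2**(power - 1) numbers with exactly `power` bits,
    # and each of them needs `power` bits of storage.
    return sum(power * 2**(power - 1) for power in range(1, n_power + 1))

# The two methods agree where brute force is feasible...
assert brute_force_bits(16) == closed_form_bits(16)

# ...so the closed form can be trusted at 2**100.
print(closed_form_bits(100))  # 125497409422594710748173617332225
```

The sum also collapses algebraically to `99 * 2**100 + 1`, which gives the same value instantly.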

For some context, that total is ≈425414947195.2363 times the world's data storage capacity, per the estimate in the reference below (note the ratio divides the raw total by a byte-denominated capacity figure).

Reference: https://www.zdnet.com/article/what-is-the-worlds-data-storage-capacity/
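The ratio can be reproduced as follows; the ~295 exabyte (295 × 10^18 bytes) capacity figure is an assumption taken from the linked article, and the division uses the raw total as in the figure above:

```python
# Closed-form total from the answer (sum of p * 2**(p - 1) for p in 1..100)
total = 99 * 2**100 + 1

# Assumed global storage capacity, ~295 exabytes per the linked article
earth_capacity = 295 * 10**18

print(total / earth_capacity)  # ≈ 425414947195.2363
```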