From Binary to Code: Why JavaScript Devs Need to Know Bits and Bytes

Understanding what bits and bytes are might sound like a computer science topic that is mostly irrelevant to JavaScript developers. However, this is not true. It is a practical skill that not only makes you a better software developer but also enables you to solve real-world problems in JavaScript.

This article will explain the basics of bits and bytes and how computers use them. After that, you'll see why it is important to understand this topic and how to apply this knowledge in JavaScript.


In the previous article, we learned about different numeric systems, how they can be used, and why we need them. One of the numeric systems was the binary numeric system.

The binary numeric system has only two numbers: 0 and 1. At its most fundamental level, everything on your computer is just 0s and 1s. Only two numbers are enough to build any software you can imagine.

This is possible because a single 0 or 1 is the smallest and most fundamental unit of information, also known as a bit.

For example, with only one bit, you can represent the following information:

  • Whether a statement holds. It is either true or false.

  • The answer when somebody asks you for a donation. It is either yes or no.

  • Whether an agreement is signed. You either sign it, accepting its terms, or you don't.

All the above examples have only two possible outcomes: 0 or 1, true or false.

One of the interesting aspects of a bit is that the meaning of a particular bit is completely contextual. The same bit in different contexts means completely different things.

Let's continue with the signature example. Writing your signature on a blank piece of paper has little value because there is no context to it. If you leave your signature under an employment contract, it means that you accept all of the contract's conditions. Leaving the same signature under a bank check makes that check valid for cashing.

You see how the information is the same in all three cases. However, the interpretation of this information is completely different and solely depends on the context.

That said, you can convey only a tiny amount of information using one bit. If you want to record whether three of your friends donated to a charity, you can use 3 bits for it:

Name  | Action        | Result in binary
Marry | Didn't donate | 0

Notice, there are no complex decision trees or anything like that. It is a simple true or false answer.
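As a minimal sketch, the three answers map directly to three bits (Marry's 0 comes from the table above; the other two answers are made up for illustration):

```javascript
// One bit per friend: 1 = donated, 0 = didn't donate.
// Marry didn't donate; the other two answers are hypothetical.
const answers = [false, true, true];
const bits = answers.map((a) => (a ? 1 : 0)).join('');

console.log(bits);   // "011"
console.log(2 ** 3); // 8 — possible combinations for three friends
```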


A byte is a group of 8 bits. It was introduced to deliver more information in a standardized way. The byte simplifies the abstraction in the same way that larger number words do. Imagine you have one million pens. What a nice word, a million. It wouldn't be that easy to operate with the number if you had to say one thousand thousand, right?

A byte has a range of possible values, from 00000000 to 11111111 in binary, which is 256 decimal values ranging from 0 to 255.
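You can verify this range in JavaScript itself; `parseInt` with radix 2 and `toString(2)` convert between binary strings and numbers (the `asByte` helper below is just an illustrative name):

```javascript
// The smallest and largest values an 8-bit byte can hold.
const min = parseInt('00000000', 2);
const max = parseInt('11111111', 2);

console.log(min, max); // 0 255
console.log(2 ** 8);   // 256 — total number of distinct byte values

// Going the other way: render a number as 8 binary digits.
const asByte = (n) => n.toString(2).padStart(8, '0');
console.log(asByte(255)); // "11111111"
```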

A single byte is enough to store most English letters. When one byte is not enough, as in the case of Chinese characters, we can introduce an additional byte. With two bytes, you have 65,536 possible options for encoding a character. We'll talk more about encoding in the next article.

Another common use case for bytes is color, particularly the RGB (red, green, blue) color model. A single byte represents each channel in the RGB model, so the final color takes 3 bytes. Just 3 bytes make it possible to represent 16,777,216 different colors.

When talking about computers, displays use different color models; some of them are RGB displays. Every pixel on such a display takes exactly 3 bytes for an RGB color.
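A quick sketch of the 3-byte model: each channel becomes two hex digits, which is exactly the familiar CSS color notation (`toHexColor` is a hypothetical helper name):

```javascript
// One byte (0-255) per channel, two hex digits per byte.
const toHexColor = (r, g, b) =>
  '#' + [r, g, b].map((c) => c.toString(16).padStart(2, '0')).join('');

console.log(toHexColor(255, 165, 0)); // "#ffa500"
console.log(256 ** 3);                // 16777216 — possible 3-byte colors
```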

Bytes and memory

Now we know that a bit is the smallest possible unit of information and that a single byte is a group of 8 bits. One of the practical applications of this knowledge is related to files and file systems on a computer.

A text file? Depending on the encoding, each character could take from one to four bytes.

An image? Images are composed of pixels, and every pixel is basically a set of bytes. The exact number of bytes required to store a single pixel depends heavily on the color model.

Every byte of a text file, image, video, audio, etc. is stored directly on your computer. When you download a file, its size tells you how much space it will take up on your computer.

Representation of how text and image files are using bytes to store information

It is a fundamental concept, and every operating system works the same way. What differs, though, is the order in which bytes are stored. The concept describing byte order is called endianness.

There are two byte orders: big-endian and little-endian. Big-endian stores the most significant byte first, that is, the byte that plays the most significant role in a piece of data.

For example, an ISO date (2060-10-22) follows the big-endian order because the year, the most significant part of the date, comes first, followed by the month, and only then the day of the month, the least significant number.

On the other hand, if you take a look at the standard European way of writing dates (24 July 2060), you'll see that it follows the little-endian order. The least significant number, the day of the month, comes first, followed by the month, and only then the year.

Difference between big endian and little endian on different types of dates

Why choose hexadecimal over binary

Here is how an encoded text looks in the binary numeric system.

01001000 01100101 01101100 01101100 01101111 00100000 01110111 01101111 01110010 01101100 01100100 00100001

The code above uses ASCII/UTF-8 encoding. When you decode the binary to a string, you'll see "Hello world!". But what if we have more than two words? The binary gets massive fast. To address this problem, you can use the hexadecimal numeric system instead.

Here is how the same text is encoded in hexadecimal:

48 65 6c 6c 6f 20 77 6f 72 6c 64 21

It is 4 times shorter (two hex digits per byte instead of eight binary digits), but the meaning is the same.

The other nice thing about binary-to-hexadecimal conversion is that every byte can be represented by exactly two hexadecimal digits. The smallest possible value of a byte, 00000000, is written in hexadecimal as 00, and the highest possible value, 11111111, is just FF.

This compact alternative to binary is the exact reason why hexadecimal has become so widely used.

Why knowing bits and bytes is useful for a JavaScript developer

It is all nice and interesting, but how do you actually apply this knowledge as a JavaScript developer in your work?

Bitwise operations

Bitwise operators are common when it comes to working with:

  • Cryptography

  • Different kinds of 2D and 3D graphics

  • High-performance operations when dealing with large datasets

They're meant for specific tasks, and you won't use them that often. At the same time, implementing some features without bitwise operators is hardly possible.

Understanding memory consumption by different language structures

Each structure in the JavaScript language takes up a specific amount of memory. Understanding the bit and byte concept allows you to write memory-efficient programs.

The actual size of any given structure depends on specific JavaScript engine implementations. We'll look at how V8, the most popular engine, allocates memory for some of the commonly used JavaScript structures.

Strings. Every character in JavaScript is encoded in UTF-16. It means that 2 bytes are required to store a single character (4 for characters outside the Basic Multilingual Plane). For example, the "hello" string consists of 5 characters and takes 10 bytes, or 80 bits, of memory.
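You can see those 16-bit code units from JavaScript itself; `length` counts code units, and `charCodeAt` returns one unit as a number:

```javascript
const s = 'hello';
console.log(s.length);        // 5 — UTF-16 code units
console.log(s.length * 2);    // 10 — bytes at 2 bytes per code unit
console.log(s.charCodeAt(0)); // 104 — the 16-bit value for "h"
```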

Numbers. The allocated memory depends on what type of number you're dealing with. It's either a 31-bit signed integer or a 64-bit double-precision floating-point number.

Booleans. For the sake of simplicity, a boolean takes 1 byte.

Working with files, blobs, and buffers

Any file on your computer is just a set of bytes. Dealing with a file is the equivalent of dealing with a set of bytes.

It is one of the most common operations in JavaScript. We build applications where users can upload, download, and sometimes change files: rename them, change their encoding format, and compress or decompress them.

We usually use blobs and buffers for operations on files. Performing efficient operations on files requires a proper understanding of bits and bytes.


A bit is the building block of all digital information. It is the smallest unit of information possible, represented by only two numbers, 0 and 1.

A byte is an abstraction created to work with a large number of bits. A single byte is a set of 8 bits.

For JavaScript developers, understanding bits and bytes isn't just theory. It has real, practical uses:

  • Bitwise operations for tasks like cryptography

  • Optimizing how much memory your app uses

  • Working with files, blobs, and buffers efficiently

In our next article, we'll explore different encodings, showing how bits and bytes come together in Unicode, the most widely used encoding scheme.