
Number Base Converter

Convert numbers between binary, octal, decimal, and hexadecimal.

  • Binary — base 2
  • Octal — base 8
  • Decimal — base 10
  • Hexadecimal — base 16

What is a number base converter?

A number base converter takes a number written in one numeral system — binary, octal, decimal, or hexadecimal — and rewrites it in any of the others. Developers reach for it when reading bitmasks, debugging color codes, decoding flags in protocol packets, working with hardware registers, and translating between hex and decimal in everything from CSS to assembly to crypto.

The number itself doesn't change between bases — only the *representation* does. 255 in decimal is 0xFF in hex, 0b11111111 in binary, and 0o377 in octal. They are four different ways of writing the same value.

Modern tools should handle big numbers without precision loss. JavaScript's regular parseInt returns a Number, which silently loses precision past 2⁵³ − 1 — a source of subtle bugs with 64-bit register values, IDs, and timestamps. A solid converter uses BigInt under the hood so a 256-bit hex value round-trips perfectly.

What you'll learn while converting bases

  • Binary uses 2 digits (0, 1), octal uses 8, decimal uses 10, and hexadecimal uses 16 (0–9, a–f).
  • Every binary digit is one *bit*. 4 bits = 1 hex digit. 8 bits = 1 byte = 2 hex digits.
  • Hex is just a compact way to read binary. 0xCAFE is 1100 1010 1111 1110 in binary — easier on the eyes.
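These relationships are easy to check with JavaScript's built-in toString(radix) and parseInt:

```javascript
// A number is the same value in every base; only toString's radix changes.
const n = 0xCAFE; // 51966 in decimal

console.log(n.toString(2));  // "1100101011111110" — one binary digit per bit
console.log(n.toString(16)); // "cafe" — each hex digit stands for exactly 4 bits
console.log(n.toString(8));  // "145376" — each octal digit stands for 3 bits

// Round-trip: parsing the binary string recovers the same number.
console.log(parseInt('1100101011111110', 2) === 0xCAFE); // true
```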

How to convert between bases step by step

  1. Pick the input base

    Choose binary, octal, decimal, or hexadecimal — whichever matches the number you have.

  2. Type the value

    Hex accepts both ff and 0xff. Binary accepts both 1010 and 0b1010. The converter strips the prefix automatically.

  3. Read every base at once

    All four representations of the same number appear instantly. Click any one to copy it.

  4. Toggle digit grouping for readability

    Group binary by 4 bits (1100 1010 1111 1110) and hex by 2 chars (CA FE). Useful for inspecting bitfields and dumps.
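The grouping step can be sketched in a few lines of JavaScript (group is an illustrative helper, not the converter's actual implementation):

```javascript
// Group a digit string from the right in chunks of `size`,
// padding the leftmost group so every chunk has equal width.
function group(digits, size) {
  const padded = digits.padStart(Math.ceil(digits.length / size) * size, '0');
  return padded.match(new RegExp(`.{${size}}`, 'g')).join(' ');
}

const n = 0xCAFE;
console.log(group(n.toString(2), 4));  // "1100 1010 1111 1110"
console.log(group(n.toString(16), 2)); // "ca fe"
```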

Number base quick reference

Common reference values across all four bases. The 0b, 0o, and 0x prefixes used on this page follow JavaScript's numeric-literal grammar, which BigInt accepts as well.

Decimal   Binary             Octal    Hex
0         0                  0        0
1         1                  1        1
8         1000               10       8
10        1010               12       A
16        10000              20       10
32        100000             40       20
64        1000000            100      40
100       1100100            144      64
128       10000000           200      80
255       11111111           377      FF
256       100000000          400      100
1024      10000000000        2000     400
65535     1111111111111111   177777   FFFF
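Any row of the table can be reproduced with Number.prototype.toString:

```javascript
// Render one value in binary, octal, and hex, matching the table above.
function row(n) {
  return [n, n.toString(2), n.toString(8), n.toString(16).toUpperCase()].join(' | ');
}

console.log(row(255));   // "255 | 11111111 | 377 | FF"
console.log(row(65535)); // "65535 | 1111111111111111 | 177777 | FFFF"
```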

Number base conversion examples to try

RGB color codes

Hex
color: #ff8800;
Decimal
color: rgb(255, 136, 0);

Each pair of hex digits is one byte (0–255) for one channel. ff = 255, 88 = 136, 00 = 0.
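Extracting the channels is a two-character split plus parseInt (parseColor is an illustrative helper, not a standard API):

```javascript
// Split a #rrggbb color into its three decimal channels.
function parseColor(hex) {
  const [r, g, b] = hex.replace('#', '')
    .match(/.{2}/g)
    .map(pair => parseInt(pair, 16));
  return { r, g, b };
}

console.log(parseColor('#ff8800')); // { r: 255, g: 136, b: 0 }
```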

Read a permission bitmask

Decimal

5

Binary

0b101

Reading the bits

Bit 0 (read) is set, bit 2 (execute) is set, bit 1 (write) is not. So permissions are read + execute.

Unix file modes (chmod), feature flags, and protocol packets all encode multiple booleans as bits in a single integer. Binary view makes that visible.
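In code, each flag is one power of two and a bitwise AND tests it — a sketch using the bit layout above (read = bit 0, write = bit 1, execute = bit 2):

```javascript
// One flag per bit position.
const READ = 1 << 0, WRITE = 1 << 1, EXEC = 1 << 2;

const perms = 0b101; // the decimal 5 from the example

console.log((perms & READ)  !== 0); // true  — bit 0 is set
console.log((perms & WRITE) !== 0); // false — bit 1 is clear
console.log((perms & EXEC)  !== 0); // true  — bit 2 is set
```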

BigInt-precise conversion

Hex (very large)

0xFFFFFFFFFFFFFFFF

Decimal

18446744073709551615

This is the maximum unsigned 64-bit integer. JavaScript's normal parseInt would lose precision; BigInt handles it cleanly.
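A minimal sketch of the difference in JavaScript:

```javascript
const hex = 'ffffffffffffffff'; // 2^64 - 1, the max unsigned 64-bit integer

// parseInt returns a Number, which is only exact up to 2^53 - 1.
console.log(parseInt(hex, 16));        // 18446744073709552000 — precision lost

// BigInt keeps every digit, so the value round-trips perfectly.
const big = BigInt('0x' + hex);
console.log(big.toString());           // "18446744073709551615"
console.log(big.toString(16) === hex); // true
```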

Common base-conversion mistakes

  • Confusing the *number* with the *literal*. 0x10 and 10 look similar but are 16 and 10 — always include the base prefix in code.
  • Reading hex left-to-right and forgetting the byte order (endianness) on raw memory dumps.
  • Using parseInt(big, 16) in JavaScript on values larger than 2⁵³. Use BigInt('0x' + hex) instead.

Number Base Converter FAQ

How do I convert binary to decimal?
Multiply each binary digit by the corresponding power of 2, starting from the right. For 1010: 1·8 + 0·4 + 1·2 + 0·1 = 10. Or paste it into a base converter for the instant answer.
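The same sum can be computed in one left-to-right pass (Horner's method), which is how converters typically parse digit strings:

```javascript
// Binary string -> decimal: fold left, doubling the running total and
// adding each digit (equivalent to summing digit * 2^position).
function binToDec(bits) {
  return [...bits].reduce((sum, bit) => sum * 2 + Number(bit), 0);
}

console.log(binToDec('1010'));     // 10
console.log(binToDec('11111111')); // 255
```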
How do I convert decimal to hex?
Repeatedly divide the decimal number by 16, recording the remainders (0–15, with 10–15 written as A–F). Read the remainders bottom-up. For 255: 255 ÷ 16 = 15 remainder 15, i.e. FF.
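The repeated-division procedure translates directly to code (equivalent to Number.prototype.toString(16)):

```javascript
// Decimal -> hex by repeated division by 16, collecting remainders
// from least-significant to most-significant digit.
function toHex(n) {
  if (n === 0) return '0';
  const digits = '0123456789ABCDEF';
  let out = '';
  while (n > 0) {
    out = digits[n % 16] + out; // remainder is the next digit, read bottom-up
    n = Math.floor(n / 16);
  }
  return out;
}

console.log(toHex(255)); // "FF"
```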
What is hexadecimal used for?
Hex is used wherever a compact, readable form of binary is helpful: color codes (#ff8800), memory addresses, file dumps, MAC addresses, hashes, and most low-level systems.
What's the difference between octal and hexadecimal?
Octal is base 8 (digits 0–7); hexadecimal is base 16 (digits 0–9, A–F). Octal is rare today but still appears in Unix file permissions (0755) and some C-family number literals.
Why do binary numbers grow so fast?
Each new digit doubles the number of representable values. 8 bits represents 256 values; 16 bits represents 65,536; 32 bits represents over 4 billion. That's why bitmasks pack a lot of meaning into a small number.
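The doubling is easy to check directly:

```javascript
// Each extra bit doubles the number of representable values: 2^bits.
for (const bits of [8, 16, 32]) {
  console.log(`${bits} bits -> ${2 ** bits} values`);
}
// 8 bits -> 256 values
// 16 bits -> 65536 values
// 32 bits -> 4294967296 values
```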
