Base64 in JavaScript and Node.js: Lessons Learned from Real Projects

Base64 Team
December 2, 2025

Honestly, Base64 seems simple at first, but there are more gotchas than you'd expect. I remember the first time I tried to encode Chinese characters with btoa() - it just threw an error. Took me a while to figure out that the browser's Base64 API only handles Latin-1 characters (code points 0-255).

In this article, I want to share my experience working with Base64 in JavaScript and Node.js over the past few years - the pitfalls I've encountered, the solutions I found, and some practical tips for handling real-world scenarios.

Let's Start with the Browser: btoa and atob

If you're working in a browser environment, the most straightforward way is using the global btoa() and atob() functions. These function names are pretty weird: btoa stands for "binary to ASCII" and atob for "ASCII to binary". I mixed them up at first too, then just memorized them the hard way.

const text = "Hello, World!";
const encoded = btoa(text);
console.log(encoded); // SGVsbG8sIFdvcmxkIQ==

const decoded = atob(encoded);
console.log(decoded); // Hello, World!

Looks simple, right? But here's where things get tricky.

First Gotcha: Unicode and Special Characters

I was working on an internationalization project once, where I needed to encode user input and put it in the URL. Naturally, I wrote:

const userInput = "δ½ ε₯½δΈ–η•Œ";
const encoded = btoa(userInput); // πŸ’₯ Error!

The console immediately threw an InvalidCharacterError. After digging around, I learned that btoa() can only handle Latin1 characters (0-255). When it encounters Unicode characters, it just fails.
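A quick console check makes the boundary obvious:

btoa('café');  // 'Y2Fm6Q==' - é is code point 0xE9, still within Latin-1
btoa('你好');   // InvalidCharacterError - these code points are above 255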

The workaround I found back then was this:

function encode(str) {
  // First use encodeURIComponent to convert Unicode to %XX format
  // Then unescape to convert to bytes
  // Finally btoa to encode
  return btoa(unescape(encodeURIComponent(str)));
}

function decode(str) {
  // Reverse operation
  return decodeURIComponent(escape(atob(str)));
}

const text = "Hello δΈ–η•Œ πŸŽ‰";
const encoded = encode(text);
console.log(encoded);
const decoded = decode(encoded);
console.log(decoded); // Hello δΈ–η•Œ πŸŽ‰

This method works, but looking back, using unescape and escape (which are deprecated APIs) always felt inelegant. Later, I discovered a more modern approach:

function encodeUTF8(str) {
  const encoder = new TextEncoder();
  const uint8Array = encoder.encode(str);

  // Convert Uint8Array to regular string
  let binaryString = '';
  for (let i = 0; i < uint8Array.length; i++) {
    binaryString += String.fromCharCode(uint8Array[i]);
  }

  return btoa(binaryString);
}

function decodeUTF8(base64) {
  const binaryString = atob(base64);
  const uint8Array = new Uint8Array(binaryString.length);

  for (let i = 0; i < binaryString.length; i++) {
    uint8Array[i] = binaryString.charCodeAt(i);
  }

  const decoder = new TextDecoder();
  return decoder.decode(uint8Array);
}

Using TextEncoder and TextDecoder makes the code logic clearer, and these are standard APIs, so you don't have to worry about them being deprecated.
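A quick round trip with these two helpers:

const encoded = encodeUTF8('Hello 世界 🎉');
console.log(encoded);             // SGVsbG8g5LiW55WMIPCfjok=
console.log(decodeUTF8(encoded)); // Hello 世界 🎉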

Real-World Scenario: File Upload

I had a project that required converting user-selected images to Base64 on the frontend and uploading them to the server. Here's the approach I used:

async function uploadImage(file) {
  // Check file size first - Base64 will make it even larger
  if (file.size > 5 * 1024 * 1024) { // 5MB
    alert('File too large, consider using FormData upload instead');
    return;
  }

  const base64 = await readFileAsBase64(file);

  // Send to server
  const response = await fetch('/api/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filename: file.name,
      data: base64
    })
  });

  return response.json();
}

function readFileAsBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();

    reader.onload = () => {
      // reader.result is in format: data:image/png;base64,xxxxx
      // If you only need the Base64 part, remove the data URL prefix
      const dataUrl = reader.result;
      const base64 = dataUrl.split(',')[1];
      resolve(base64);
    };

    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}

One important thing to note here: FileReader.readAsDataURL() returns a complete Data URL in the format data:image/png;base64,iVBORw0KG.... If your backend only expects the Base64 string, remember to use split(',')[1] to extract just the latter part. I forgot this once and spent ages debugging why the backend kept failing to parse it.

Node.js Buffer: Finally, Everything Just Works

After switching to Node.js, life got a lot easier. Node.js's Buffer class has excellent Base64 support and handles Unicode effortlessly.

// Encoding
const text = "Hello δΈ–η•Œ πŸŽ‰";
const base64 = Buffer.from(text, 'utf8').toString('base64');
console.log(base64);

// Decoding
const decoded = Buffer.from(base64, 'base64').toString('utf8');
console.log(decoded); // Hello δΈ–η•Œ πŸŽ‰

That's it. No encodeURIComponent, no unescape, none of that hassle.

Handling Files: Node.js Version

Working with files in Node.js is even more convenient. For example, I had a requirement to read a PDF file from the server, convert it to Base64, and return it to the frontend.

const fs = require('fs').promises;

async function getFileAsBase64(filePath) {
  try {
    const buffer = await fs.readFile(filePath);
    return buffer.toString('base64');
  } catch (error) {
    console.error('Failed to read file:', error);
    throw error;
  }
}

// Conversely, save Base64 as a file
async function saveBase64AsFile(base64String, outputPath) {
  const buffer = Buffer.from(base64String, 'base64');
  await fs.writeFile(outputPath, buffer);
}

// Usage
(async () => {
  const pdfBase64 = await getFileAsBase64('./document.pdf');
  console.log(`Base64 length: ${pdfBase64.length}`);

  // Save to new file
  await saveBase64AsFile(pdfBase64, './document-copy.pdf');
})();

This method works great for small files. But then I used it to process a 200MB video file, and the program crashed with an out-of-memory (OOM) error. Lesson learned: for large files, use streaming. There's one subtlety, though - each chunk you encode must be a multiple of 3 bytes, or padding characters end up in the middle of the output:

const fs = require('fs');
const { Transform } = require('stream');

function convertLargeFileToBase64(inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    const readStream = fs.createReadStream(inputPath);
    const writeStream = fs.createWriteStream(outputPath);

    // Base64 turns 3 bytes into 4 characters, so each chunk we encode
    // must be a multiple of 3 bytes - otherwise padding ("=") appears
    // mid-stream and corrupts the output. Carry the remainder over.
    let leftover = Buffer.alloc(0);

    const base64Stream = new Transform({
      transform(chunk, encoding, callback) {
        const data = Buffer.concat([leftover, chunk]);
        const usable = data.length - (data.length % 3);
        leftover = data.subarray(usable);
        this.push(data.subarray(0, usable).toString('base64'));
        callback();
      },
      flush(callback) {
        // Encode whatever bytes remain (with padding) at the very end
        if (leftover.length > 0) {
          this.push(leftover.toString('base64'));
        }
        callback();
      }
    });

    readStream
      .pipe(base64Stream)
      .pipe(writeStream)
      .on('finish', () => {
        console.log('Conversion complete');
        resolve();
      })
      .on('error', reject);

    readStream.on('error', reject);
    base64Stream.on('error', reject);
  });
}

Using streams keeps memory usage under control. But honestly, there's really no need to convert large files to Base64 - transmitting binary data directly is much more efficient.
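For comparison, here's a rough sketch of what serving the binary directly looks like (the file name and port are made up for illustration):

const http = require('http');

http.createServer((req, res) => {
  // Stream the raw bytes - no Base64, no 33% overhead, constant memory
  res.setHeader('Content-Type', 'video/mp4');
  fs.createReadStream('./video.mp4').pipe(res);
}).listen(3000);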

URL-Safe Base64: A Tiny Detail That Caused a Big Bug

I was working on an API signature feature once, where I encoded some parameters into Base64 and put them in the URL. During testing, I found that occasionally requests would fail. After investigating for ages, I discovered that standard Base64 uses + and / characters, which have special meanings in URLs.

For example, this URL:

https://api.example.com/verify?token=abc+def/ghi=

The + gets decoded as a space in the query string, the / can trip up path or route parsing, and the trailing = collides with key=value syntax - complete chaos.

The solution is to use URL-safe Base64: replace + with -, / with _, and remove the trailing = (you can add it back when decoding anyway).

// URL-safe Base64 encoding
function base64UrlEncode(str) {
  const base64 = Buffer.from(str, 'utf8').toString('base64');
  return base64
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, ''); // Remove padding
}

// URL-safe Base64 decoding
function base64UrlDecode(base64url) {
  // Replace characters back
  let base64 = base64url
    .replace(/-/g, '+')
    .replace(/_/g, '/');

  // Add padding back (Base64 length must be multiple of 4)
  while (base64.length % 4) {
    base64 += '=';
  }

  return Buffer.from(base64, 'base64').toString('utf8');
}

// Test
const data = { userId: 12345, timestamp: Date.now() };
const token = base64UrlEncode(JSON.stringify(data));
console.log(`https://api.example.com/verify?token=${token}`);

// Decode
const decoded = JSON.parse(base64UrlDecode(token));
console.log(decoded);
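By the way, if your Node version is recent enough (15.7+, or 14.18+ on the 14.x line), Buffer supports 'base64url' as a built-in encoding, so the manual character replacements aren't even necessary:

const urlSafeToken = Buffer.from(JSON.stringify(data), 'utf8').toString('base64url');
const roundTripped = JSON.parse(Buffer.from(urlSafeToken, 'base64url').toString('utf8'));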

This little trick is especially useful when working with JWT, short URLs, or API tokens.
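JWTs are a good example: each of the three dot-separated segments is base64url-encoded, so you can peek at the payload. (This only decodes - it does not verify the signature. The token below is made up for illustration.)

// header.payload.signature
const jwt = 'eyJhbGciOiJIUzI1NiJ9.eyJ1c2VySWQiOjEyMzQ1fQ.c2lnbmF0dXJl';
const payload = JSON.parse(base64UrlDecode(jwt.split('.')[1]));
console.log(payload); // { userId: 12345 }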

Some Real-World Scenarios and Tricks

Inlining Small Icons

I had a project where we needed to optimize initial page load speed, so I inlined some small icons (a few KB of SVG) directly into the CSS.

const fs = require('fs').promises;

async function svgToDataUrl(svgPath) {
  const svgContent = await fs.readFile(svgPath, 'utf8');

  // SVGs don't need Base64, URL encoding is more efficient
  // But if you want to use Base64, you can
  const base64 = Buffer.from(svgContent, 'utf8').toString('base64');

  return `data:image/svg+xml;base64,${base64}`;
}

// Generate CSS (wrapped in an async function - CommonJS has no top-level await)
(async () => {
  const iconUrl = await svgToDataUrl('./icon.svg');
  console.log(`
.icon {
  background-image: url("${iconUrl}");
}
`);
})();

Later I discovered that for SVGs, using Base64 actually increases the size. A better approach is to directly URL encode the SVG content:

function svgToDataUrlOptimized(svgContent) {
  // Simple compression: remove extra spaces and newlines
  const minified = svgContent
    .replace(/\s+/g, ' ')
    .replace(/>\s+</g, '><')
    .trim();

  // Direct URL encode, no Base64
  const encoded = encodeURIComponent(minified)
    .replace(/'/g, '%27')
    .replace(/"/g, '%22');

  return `data:image/svg+xml,${encoded}`;
}

This generates a data URL that's about 30% smaller than the Base64 version.

Canvas Screenshot Feature

I worked on an online editor that needed to export Canvas content as images.

function exportCanvas(canvas, format = 'png') {
  // canvas.toDataURL() returns the image as a Base64 data URL.
  // The second argument is quality (0-1), which only applies to
  // image/jpeg and image/webp - PNG ignores it.
  const dataUrl = canvas.toDataURL(`image/${format}`, 0.9);

  // To download
  const link = document.createElement('a');
  link.download = `export-${Date.now()}.${format}`;
  link.href = dataUrl;
  link.click();
}

// To upload to server, convert data URL to Blob
function dataUrlToBlob(dataUrl) {
  const arr = dataUrl.split(',');
  const mime = arr[0].match(/:(.*?);/)[1];
  const base64 = arr[1];

  // Convert Base64 to binary
  const binaryString = atob(base64);
  const len = binaryString.length;
  const bytes = new Uint8Array(len);

  for (let i = 0; i < len; i++) {
    bytes[i] = binaryString.charCodeAt(i);
  }

  return new Blob([bytes], { type: mime });
}

// Upload with FormData (more efficient than sending Base64 string directly)
async function uploadCanvasAsFile(canvas) {
  const dataUrl = canvas.toDataURL('image/png');
  const blob = dataUrlToBlob(dataUrl);

  const formData = new FormData();
  formData.append('image', blob, 'canvas.png');

  await fetch('/api/upload', {
    method: 'POST',
    body: formData
  });
}
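One more tip: if you never need the Data URL itself, canvas.toBlob() gives you a Blob directly and skips the Base64 round trip entirely:

async function uploadCanvasDirect(canvas) {
  // toBlob is callback-based, so wrap it in a Promise
  const blob = await new Promise((resolve) =>
    canvas.toBlob(resolve, 'image/png')
  );

  const formData = new FormData();
  formData.append('image', blob, 'canvas.png');

  await fetch('/api/upload', { method: 'POST', body: formData });
}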

LQIP (Low Quality Image Placeholder)

This is a technique I really like. For lazy-loaded images, first show a 20x20 blurred thumbnail, then replace it when the real image loads.

const sharp = require('sharp'); // Need to install sharp library

async function generateLQIP(imagePath) {
  // Generate a tiny blurred image
  const tinyImage = await sharp(imagePath)
    .resize(20, 20, { fit: 'inside' })
    .blur(2)
    .toBuffer();

  const base64 = tinyImage.toString('base64');
  return `data:image/jpeg;base64,${base64}`;
}

// Use in HTML (again wrapped, since CommonJS has no top-level await)
(async () => {
  const placeholder = await generateLQIP('./hero.jpg');

  // onload fires twice: once for the placeholder, once for the real
  // image - only remove the blur on the second load
  const html = `
    <img
      src="${placeholder}"
      data-src="hero.jpg"
      class="blur-up"
      onload="if (!this.dataset.swapped) { this.dataset.swapped = '1'; this.src = this.dataset.src; } else { this.classList.add('loaded'); }"
    />

    <style>
      .blur-up { filter: blur(10px); transition: filter 0.3s; }
      .blur-up.loaded { filter: blur(0); }
    </style>
  `;
})();

This approach generates a placeholder that's only a few hundred bytes, but the user experience improvement is significant. Medium uses this technique.

Some Thoughts on Performance

Base64 has an unavoidable issue: it increases size by 33%. I ran some tests:

function compareSize(data) {
  const original = Buffer.byteLength(data);
  const base64 = Buffer.from(data).toString('base64');
  const encoded = Buffer.byteLength(base64);

  console.log(`Original size: ${original} bytes`);
  console.log(`Base64:        ${encoded} bytes`);
  console.log(`Increase:      ${encoded - original} bytes (${((encoded/original - 1) * 100).toFixed(1)}%)`);
}

compareSize('x'.repeat(1000)); // Test 1KB data
// Original size: 1000 bytes
// Base64:        1336 bytes
// Increase:      336 bytes (33.6%)

So my rule of thumb is:

Good scenarios for Base64:

  • Small files (< 10KB), like icons and thumbnails
  • Data that needs to be embedded in JSON or text protocols
  • Binary data that needs to be transmitted with text

Don't use Base64 for:

  • Large files (> 100KB) - use binary transmission directly
  • Frequently accessed resources - CDN + caching is better
  • Mobile weak network environments - that 33% size increase really hurts

I once used Base64 to inline about a dozen icons on a mobile HTML5 page, totaling around 100KB. Looking at the metrics later, I found that first-screen load time for 4G users slowed by 500ms. After switching to a sprite sheet, speed improved noticeably.

Security Issue: Base64 Is NOT Encryption

This needs to be crystal clear. I've seen people encode passwords with Base64 and store them in localStorage, thinking it's "secure". This is completely wrong.

// ❌ This is NOT encryption, anyone can decode it
const password = "mypassword";
const encoded = btoa(password); // bXlwYXNzd29yZA==
localStorage.setItem('pwd', encoded);

// Anyone can get the plaintext like this
const decoded = atob(localStorage.getItem('pwd'));
console.log(decoded); // mypassword

Base64 is just encoding, not encryption. There's no key involved; anyone can decode it. If you really need encryption, use the Web Crypto API:

// This is actual encryption (Web Crypto API)
async function encryptPassword(password, key) {
  const encoder = new TextEncoder();
  const data = encoder.encode(password);

  // AES-GCM needs a fresh random IV for every encryption -
  // never reuse a fixed or all-zero IV with the same key
  const iv = crypto.getRandomValues(new Uint8Array(12));

  const encrypted = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    data
  );

  // The ciphertext (and the IV - you need both to decrypt)
  // can then be Base64-encoded for storage
  const toBase64 = (bytes) => {
    let binary = '';
    for (const b of bytes) binary += String.fromCharCode(b);
    return btoa(binary);
  };
  return { iv: toBase64(iv), ciphertext: toBase64(new Uint8Array(encrypted)) };
}

Another common security trap: HTTP Basic Authentication.

// Basic Auth just Base64 encodes "username:password" and puts it in the header
const credentials = btoa('admin:123456');
fetch('/api/data', {
  headers: {
    'Authorization': `Basic ${credentials}`
  }
});

This is completely insecure over HTTP - it's basically plaintext transmission. Always use it with HTTPS.

Some Debugging Tips

Validate if Base64 is Valid

function isValidBase64(str) {
  // Basic format check
  if (!/^[A-Za-z0-9+/]*={0,2}$/.test(str)) {
    return false;
  }

  // Length must be multiple of 4
  if (str.length % 4 !== 0) {
    return false;
  }

  // Try to decode
  try {
    const decoded = atob(str);
    // If it can be encoded back and equals, it's valid
    return btoa(decoded) === str;
  } catch (e) {
    return false;
  }
}

console.log(isValidBase64('SGVsbG8=')); // true
console.log(isValidBase64('Hello!')); // false

Detect Base64 Content Type

Sometimes you get a Base64 string and don't know what type of file it is. You can decode it and check the file header:

function detectBase64Type(base64) {
  const buffer = Buffer.from(base64, 'base64');

  // Check file header (magic number)
  const hex = buffer.toString('hex', 0, 4);

  const signatures = {
    '89504e47': 'image/png',
    'ffd8ffe0': 'image/jpeg',
    'ffd8ffe1': 'image/jpeg',
    '47494638': 'image/gif',
    '25504446': 'application/pdf',
    '504b0304': 'application/zip',
  };

  return signatures[hex] || 'unknown';
}

const pngBase64 = '...'; // Some image Base64
console.log(detectBase64Type(pngBase64)); // image/png

Final Recommendations

After using Base64 for years, I've gathered a few insights:

  1. Always convert to UTF-8 first when handling Unicode in browser environments. Don't ask me how I know - I've debugged too many garbled text issues.

  2. Use Buffer directly in Node.js, don't reinvent the wheel. Node.js's Buffer is fast and reliable, no need to reimplement it.

  3. Don't use Base64 for large files. For files over 100KB, use binary transmission directly. I've seen someone convert a 10MB video to Base64, and the browser just froze.

  4. Watch out for special characters in URLs. If you need to put Base64 in a URL, remember to use the URL-safe version, or wrap it with encodeURIComponent.

  5. Base64 is NOT encryption. Can't emphasize this enough. If you need security, use real encryption algorithms.

  6. Be careful with memory. Base64 encoding creates a new string in memory. For large files, memory usage is 2-3 times the original file size (original buffer + Base64 string).

  7. Consider compression. If the Base64 content is JSON or other text data, gzip it first before encoding - it saves a lot of space.

const zlib = require('zlib');

async function compressAndEncode(data) {
  return new Promise((resolve, reject) => {
    zlib.gzip(Buffer.from(data), (err, compressed) => {
      if (err) reject(err);
      else resolve(compressed.toString('base64'));
    });
  });
}

(async () => {
  const json = JSON.stringify({ /* lots of data */ });
  const base64Only = Buffer.from(json).toString('base64');
  const compressed = await compressAndEncode(json);

  console.log(`Base64 only: ${base64Only.length} bytes`);
  console.log(`Compressed + Base64: ${compressed.length} bytes`);
  // For highly redundant data, this can save 60-80%
})();
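The matching decode path is the reverse: Base64 back to a Buffer, then gunzip. A minimal sketch:

function decodeAndDecompress(base64) {
  return new Promise((resolve, reject) => {
    zlib.gunzip(Buffer.from(base64, 'base64'), (err, buffer) => {
      if (err) reject(err);
      else resolve(buffer.toString('utf8'));
    });
  });
}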

Well, that's my experience and lessons learned using Base64 over the years. Honestly, Base64 itself isn't complicated, but in real projects, there are always various details to watch out for. Hope this article helps you avoid some pitfalls.

If you have any questions or better practices, feel free to share.
