Picture the scene. You have built a Next.js application that uses Puppeteer to generate PDFs. It works. Locally, it works beautifully. The PDFs render, the logic holds, you are satisfied. You deploy to Vercel. Then:
Error: Could not find Chrome (ver. 138.0.7204.94).
This can occur if either:
1. you did not perform an installation before running the script
2. your cache path is incorrectly configured
A deeply unwelcome message.
After some investigation, the situation became clear: Puppeteer bundles a Chromium binary that Vercel's serverless environment cannot accommodate. The fixes suggested by various LLM tools (rewrite everything using React PDF, or pay handsomely for an external browser service) were unsatisfying. A better path existed. Here it is.
The Original Setup
The local configuration was straightforward:
// Original working local setup
import puppeteer from "puppeteer";

export async function generateCertificatePDF(data: CertificateData): Promise<Buffer> {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      "--no-sandbox",
      "--disable-setuid-sandbox",
      "--disable-web-security",
    ],
  });

  const page = await browser.newPage();
  // htmlContent: the certificate HTML built from `data` (template construction elided)
  await page.setContent(htmlContent);
  const pdfBuffer = await page.pdf({ format: "A4" });
  await browser.close();
  // page.pdf() returns a Uint8Array in recent Puppeteer versions; normalize to Buffer
  return Buffer.from(pdfBuffer);
}
Why it worked locally -> Puppeteer downloads its own Chromium build, roughly 300MB. A local machine accommodates this without complaint.
Why it failed on Vercel -> Vercel enforces a 50MB function bundle size limit. Puppeteer's bundled Chromium is six times that figure. Serverless environments do not include Chrome by default and will not search for it on your behalf.
A Second Problem — The Database Corruption
The Chrome error was not even the worst of it. The application had a critical bug:
// DANGEROUS: This was updating the database even when PDF generation failed!
await updateInternCertificateKey(intern.email, s3Key); // s3Key was NULL!
When PDF generation failed, the database was still being updated, with a NULL S3 key. The consequence: users could not regenerate their PDFs because the application believed generation had already succeeded. The application then attempted to retrieve NULL links. Production data was silently corrupted.
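The fix for this half of the problem is ordering and guarding: persist nothing until the PDF bytes and the S3 key actually exist. A minimal sketch of the corrected flow follows; the helper names (generatePDF, uploadToS3, updateInternCertificateKey) are stand-ins stubbed out here so the control flow is self-contained, and the real signatures will differ.

```typescript
type UploadResult = { s3Key: string | null };

// Stub: simulate a failed PDF generation (null instead of bytes).
async function generatePDF(): Promise<Buffer | null> {
  return null;
}

// Stub: pretend upload; never reached when generation fails.
async function uploadToS3(_pdf: Buffer): Promise<UploadResult> {
  return { s3Key: "certificates/abc.pdf" };
}

// Record database writes so we can verify none happen on failure.
const dbWrites: string[] = [];
async function updateInternCertificateKey(email: string, s3Key: string): Promise<void> {
  dbWrites.push(`${email}:${s3Key}`);
}

async function issueCertificate(email: string): Promise<boolean> {
  const pdf = await generatePDF();
  if (!pdf) return false;            // bail out BEFORE touching the database
  const { s3Key } = await uploadToS3(pdf);
  if (!s3Key) return false;          // never persist a NULL key
  await updateInternCertificateKey(email, s3Key);
  return true;
}
```

The invariant is simple: the database write is the last step, and every earlier step can abort without side effects.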
Both problems required fixing.
Three Approaches Considered
Approach 1: React PDF
Rewrite all PDF generation using @react-pdf/renderer. Serverless-friendly at roughly 4MB, no browser dependency, reliable. The cost: a complete rewrite of existing logic, and meaningfully less rendering flexibility.
// React PDF approach - clean but requires rewrite
import { pdf, Document, Page, Text } from '@react-pdf/renderer';
const MyDocument = () => (
  <Document>
    <Page>
      <Text>Certificate content...</Text>
    </Page>
  </Document>
);

const buffer = await pdf(<MyDocument />).toBuffer();
Approach 2: External Browser Services
Services such as Browserless.io provide remote Chrome instances via WebSocket. Powerful, complete, and approximately $200 per month. Not appropriate for projects without that budget.
// Browserless approach - works but costs money
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.BLESS_TOKEN}`
});
Approach 3: Serverless Chromium
The community-maintained @sparticuz/chromium package is built specifically for serverless environments: sized to fit within bundle limits, actively maintained, and free. It keeps your existing Puppeteer code substantially intact.
The Implementation
Step 1 — Package changes:
npm uninstall puppeteer
npm install @sparticuz/chromium puppeteer-core
npm install --save-dev puppeteer
Step 2 — Environment-aware browser launch:
The key insight is that development and production require different strategies. One function handles both:
async function getBrowser() {
  if (process.env.NODE_ENV === 'production') {
    // Production: load the serverless-friendly binary and puppeteer-core lazily,
    // so the heavy packages are only resolved where they are needed
    const [chromiumModule, puppeteerModule] = await Promise.all([
      import('@sparticuz/chromium'),
      import('puppeteer-core')
    ]);
    const chromium = chromiumModule.default;
    const puppeteer = puppeteerModule.default;

    return puppeteer.launch({
      args: [...chromium.args, '--no-sandbox', '--disable-setuid-sandbox'],
      defaultViewport: chromium.defaultViewport,
      executablePath: await chromium.executablePath(),
      headless: chromium.headless,
      ignoreHTTPSErrors: true,
    });
  } else {
    // Local development: use the full puppeteer package with its bundled Chromium
    const puppeteerModule = await import('puppeteer');
    const puppeteer = puppeteerModule.default;

    return puppeteer.launch({
      headless: true,
      args: ["--no-sandbox", "--disable-setuid-sandbox"],
    });
  }
}
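With getBrowser in place, the generation function should also release the browser even when rendering throws; otherwise a failed render leaks a Chromium process and, in a serverless function, holds memory until the invocation dies. A minimal sketch of the try/finally pattern, using a stubbed browser object in place of a real Puppeteer Browser so the shape is self-contained (in real code the stub is simply `await getBrowser()`):

```typescript
type FakePage = {
  setContent: (html: string) => Promise<void>;
  pdf: () => Promise<Buffer>;
};
type FakeBrowser = {
  newPage: () => Promise<FakePage>;
  close: () => Promise<void>;
  closed: boolean;
};

// Stub browser standing in for a Puppeteer Browser instance.
function makeFakeBrowser(): FakeBrowser {
  const browser: FakeBrowser = {
    closed: false,
    async newPage() {
      return {
        async setContent(_html: string) {},
        async pdf() { return Buffer.from("%PDF-1.4"); },
      };
    },
    async close() { browser.closed = true; },
  };
  return browser;
}

async function generateCertificatePDF(html: string, browser: FakeBrowser): Promise<Buffer> {
  try {
    const page = await browser.newPage();
    await page.setContent(html);
    return await page.pdf();
  } finally {
    await browser.close(); // runs even when setContent/pdf throws
  }
}
```

The finally block is the whole point: success and failure both end with the browser closed.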
Step 3 — Next.js configuration:
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Keep these packages out of the bundle; Node resolves them at runtime
  serverExternalPackages: ['puppeteer-core', '@sparticuz/chromium'],
  webpack: (config, { isServer }) => {
    if (isServer) {
      config.externals.push('@sparticuz/chromium');
    }
    return config;
  },
};

export default nextConfig;
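One more deployment knob worth checking: even with the slim binary, Chromium is memory-hungry, and default serverless limits can be tight. If you hit timeouts or out-of-memory errors, a vercel.json along these lines raises the function's memory and duration. The route glob is an assumption; point it at wherever your PDF route actually lives, and note that the ceilings depend on your Vercel plan.

```json
{
  "functions": {
    "app/api/certificates/route.ts": {
      "memory": 1024,
      "maxDuration": 60
    }
  }
}
```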
The broader lesson, stated plainly: the environment in which your code runs is as much a part of your design constraints as the code itself. What works locally is not a guarantee. Know your deployment target before you commit to your dependencies.