⚔ AI Battle

ChatGPT vs Copilot: Debug a Failing API Endpoint

The Prompt
I have a Node.js Express API endpoint that returns 500 errors intermittently. The endpoint fetches user data from PostgreSQL and joins it with order history. It works fine locally but fails ~30% of the time in production. Here's the code:

```javascript
app.get('/api/users/:id/orders', async (req, res) => {
  const { id } = req.params;
  const client = await pool.connect();
  const userResult = await client.query('SELECT * FROM users WHERE id = $1', [id]);
  if (!userResult.rows.length) return res.status(404).json({ error: 'User not found' });
  const ordersResult = await client.query(
    'SELECT o.*, p.name as product_name FROM orders o JOIN products p ON o.product_id = p.id WHERE o.user_id = $1 ORDER BY o.created_at DESC',
    [id]
  );
  res.json({ user: userResult.rows[0], orders: ordersResult.rows });
});
```

The error log shows: `Error: Connection terminated unexpectedly`. What's causing this and how do I fix it?
GPT-4o — Correct Fix, Precise Diagnosis
## The Problem: Connection Leak

Your code has a **connection pool leak**. You call `pool.connect()` to acquire a client from the pool, but you **never release it back**. Every request grabs a connection and holds it forever.

Here's what's happening in production:

1. Each request calls `pool.connect()`, borrowing a connection from the pool
2. The default `pg` pool size is **10 connections**
3. After ~10 concurrent requests, the pool is exhausted
4. New requests wait for a connection. When the wait exceeds `connectionTimeoutMillis` (default 0 = no timeout in older versions), or when PostgreSQL itself kills idle connections (common with managed databases like RDS/Supabase which enforce `idle_in_transaction_session_timeout`), you get `Connection terminated unexpectedly`
5. The ~30% failure rate corresponds to traffic spikes that exhaust the pool

### Why it works locally

Locally, you're probably the only user, so you never exhaust 10 connections. And your local PostgreSQL doesn't aggressively kill idle connections.

### The Fix

You **must** release the client in a `finally` block:

```javascript
app.get('/api/users/:id/orders', async (req, res) => {
  const { id } = req.params;
  const client = await pool.connect();
  try {
    const userResult = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    if (!userResult.rows.length) {
      return res.status(404).json({ error: 'User not found' });
    }
    const ordersResult = await client.query(
      'SELECT o.*, p.name as product_name FROM orders o JOIN products p ON o.product_id = p.id WHERE o.user_id = $1 ORDER BY o.created_at DESC',
      [id]
    );
    res.json({ user: userResult.rows[0], orders: ordersResult.rows });
  } catch (err) {
    console.error('Query failed:', err);
    res.status(500).json({ error: 'Internal server error' });
  } finally {
    client.release();
  }
});
```

### Even Better: Skip `pool.connect()` Entirely

Since you're not using transactions, you don't need a dedicated client. Use `pool.query()` directly — it automatically acquires and releases connections:

```javascript
app.get('/api/users/:id/orders', async (req, res) => {
  try {
    const { id } = req.params;
    const userResult = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
    if (!userResult.rows.length) {
      return res.status(404).json({ error: 'User not found' });
    }
    const ordersResult = await pool.query(
      'SELECT o.*, p.name as product_name FROM orders o JOIN products p ON o.product_id = p.id WHERE o.user_id = $1 ORDER BY o.created_at DESC',
      [id]
    );
    res.json({ user: userResult.rows[0], orders: ordersResult.rows });
  } catch (err) {
    console.error('Query failed:', err);
    res.status(500).json({ error: 'Internal server error' });
  }
});
```

### Production Hardening

Also configure your pool to catch this class of bug earlier:

```javascript
const pool = new Pool({
  max: 20,                       // Increase from default 10
  idleTimeoutMillis: 30000,      // Release idle clients after 30s
  connectionTimeoutMillis: 5000, // Fail fast instead of hanging
});
```
Microsoft Copilot — Correct Fix, Generic Diagnosis
It looks like you might be running into a connection issue with your PostgreSQL database. The `Connection terminated unexpectedly` error usually means the connection to your database is being dropped. Here are a few things to check:

**1. Database connection limits**

Your production database might have a connection limit. Check your PostgreSQL `max_connections` setting and make sure your app isn't exceeding it.

**2. Network issues**

If your production server is in a different region than your database, network timeouts could cause connections to drop. Consider adjusting your connection timeout settings.

**3. Connection pooling**

Make sure you're using connection pooling properly. Here's an updated version of your code with better error handling:

```javascript
app.get('/api/users/:id/orders', async (req, res) => {
  try {
    const { id } = req.params;
    const client = await pool.connect();
    try {
      const userResult = await client.query('SELECT * FROM users WHERE id = $1', [id]);
      if (!userResult.rows.length) {
        return res.status(404).json({ error: 'User not found' });
      }
      const ordersResult = await client.query(
        'SELECT o.*, p.name as product_name FROM orders o JOIN products p ON o.product_id = p.id WHERE o.user_id = $1 ORDER BY o.created_at DESC',
        [id]
      );
      res.json({ user: userResult.rows[0], orders: ordersResult.rows });
    } finally {
      client.release();
    }
  } catch (err) {
    console.error('Database error:', err);
    res.status(500).json({ error: 'Internal server error' });
  }
});
```

**4. Keep-alive settings**

You might want to add keep-alive to your pool configuration:

```javascript
const pool = new Pool({
  // your existing config
  keepAlive: true,
  keepAliveInitialDelayMillis: 10000,
});
```

**5. Consider using a connection pooler**

If you're running at scale, tools like PgBouncer can help manage connections more efficiently.

Try implementing these changes and monitor if the errors persist. Let me know if you need more help!
🔍 Analysis
Both models identify the missing `client.release()` and add `finally` blocks — but the depth of their analysis is dramatically different.

GPT-4o immediately pinpoints the root cause: a connection pool leak. It explains the exact failure chain — pool exhaustion → timeout → terminated connection. It explains why the bug is intermittent (~30% maps to traffic spikes), why it works locally (a single user never exhausts 10 connections), and why production databases kill the leaked connections (`idle_in_transaction_session_timeout`). Then it provides two fixes: the `try/finally` approach and the superior `pool.query()` approach that eliminates the problem by design. The pool configuration recommendations are practical and specific.

Microsoft Copilot gives a generic checklist: check connection limits, check network issues, use pooling properly, add keep-alive, consider PgBouncer. It does add `client.release()` in a `finally` block, which fixes the immediate bug. But it buries the actual root cause among 5 equally-weighted suggestions, most of which aren't relevant. Network issues? PgBouncer? Keep-alive? These are shotgun troubleshooting — not debugging. It never explains *why* the bug is intermittent or why it works locally.

For debugging, you need a model that reads the code, identifies the flaw, and explains the causal chain. GPT-4o does that. Copilot hands you a FAQ.
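The failure mode both answers are fixing is easy to reproduce without a real database. The sketch below uses a hypothetical ten-slot `FakePool` (a stand-in for `pg`'s default pool size, not the real `pg` API) to show how a handler that never calls `release()` exhausts the pool under load, while the `finally`-based fix keeps it healthy:

```javascript
// Hypothetical stand-in for a pg pool: 10 slots, acquire fails when none are free.
class FakePool {
  constructor(max = 10) { this.available = max; }
  connect() {
    if (this.available === 0) throw new Error('Connection terminated unexpectedly');
    this.available -= 1;
    return { release: () => { this.available += 1; } };
  }
}

// Buggy handler: acquires a client and never releases it.
function leakyHandler(pool) {
  const client = pool.connect();
  // ... queries would run here; client is leaked ...
}

// Fixed handler: release in `finally`, even if a query throws.
function fixedHandler(pool) {
  const client = pool.connect();
  try {
    // ... queries would run here ...
  } finally {
    client.release();
  }
}

const leaky = new FakePool();
let failures = 0;
for (let i = 0; i < 15; i++) {
  try { leakyHandler(leaky); } catch { failures += 1; }
}
console.log(`leaky: ${failures} of 15 requests failed`); // first 10 succeed, last 5 fail

const fixed = new FakePool();
for (let i = 0; i < 15; i++) fixedHandler(fixed);
console.log('fixed: all 15 requests succeeded');
```

The simulation mirrors the production symptom: everything works until concurrent traffic outruns the pool, then a fraction of requests fail — exactly the intermittent ~30% pattern in the original bug report.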

Run your own battle

Compare GPT-4o, Microsoft Copilot and more AI models side-by-side with any prompt — free.

Try NailedIt.ai →