⚔ AI Battle

Claude vs ChatGPT: Debug a Python Fibonacci Function

The Prompt
Debug this Python code and explain the bugs:

```python
def fibonacci(n):
    if n <= 0:
        return []
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib
```

The function should also handle non-integer inputs gracefully.
Claude Opus 4.6
## Bugs Identified:

### 1. Edge Case Bug: n = 1

When n = 1, the function returns `[0, 1]` (2 elements) instead of `[0]` (1 element).

### 2. No Input Validation

The function doesn't handle non-integer inputs.

## Corrected Code:

```python
def fibonacci(n):
    if not isinstance(n, int):
        raise TypeError(f"Input must be an integer, got {type(n).__name__}")
    if n < 0:
        raise ValueError("Input must be non-negative")
    if n == 0:
        return []
    if n == 1:
        return [0]
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib
```

Also provided an alternative "graceful" version that converts inputs with try/except instead of raising errors, plus comprehensive test cases with expected output.
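Claude's "graceful" variant is only summarized above, not reproduced. A minimal sketch of what a try/except coercion approach could look like (the function name and body here are our illustration, not Claude's verbatim output):

```python
def fibonacci_graceful(n):
    """Fibonacci that coerces input instead of raising on bad types (illustrative sketch)."""
    try:
        n = int(n)  # coerce floats or numeric strings, e.g. 7.0 or "7"
    except (TypeError, ValueError):
        return []   # unconvertible input: return an empty sequence rather than raising
    if n <= 0:
        return []
    if n == 1:
        return [0]
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib

print(fibonacci_graceful("7"))   # [0, 1, 1, 2, 3, 5, 8]
print(fibonacci_graceful(None))  # []
```

The trade-off: this version never raises, which is convenient for untrusted input but silently swallows caller errors that the strict version would surface.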
GPT-4o
### Bugs and Issues

1. **Non-integer Input Handling:** The function does not handle non-integer inputs.
2. **Behavior for n = 1:** The function incorrectly returns `[0, 1]` instead of `[0]`.
3. **Missing Edge Case Handling:** n = 0 works by coincidence, not by design.

### Improved Code:

```python
def fibonacci(n):
    if not isinstance(n, int) or n < 0:
        raise ValueError("Input must be a non-negative integer")
    if n == 0:
        return []
    if n == 1:
        return [0]
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib
```

Added type check, explicit n=0 handling, and n=1 edge case.
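The practical difference between the two error-handling styles shows up at the call site: with separate exception types a caller can distinguish a wrong type from a wrong value, while a single ValueError collapses both. A small illustration of the Claude-style validation (the function name is ours):

```python
def fib_strict(n):
    # Claude-style fix: distinct exception types for type vs. value errors
    if not isinstance(n, int):
        raise TypeError(f"Input must be an integer, got {type(n).__name__}")
    if n < 0:
        raise ValueError("Input must be non-negative")
    if n == 0:
        return []
    if n == 1:
        return [0]
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib

# The caller can now react differently to each failure mode:
try:
    fib_strict(3.5)
except TypeError:
    print("bad type")   # this branch runs: 3.5 is not an int
except ValueError:
    print("bad value")  # would run for e.g. fib_strict(-3)
```

With GPT-4o's combined check, both `except` branches would have to catch the same ValueError and inspect the message to tell the cases apart.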
🔍 Analysis
Both models correctly identified the same core bugs: the n=1 edge case returning `[0, 1]` instead of `[0]`, and the missing input validation for non-integer types. Both produced working fixes.

Claude went further by providing two versions: a strict version with separate TypeError and ValueError exceptions (following Python best practices for distinguishing error types), and a "graceful" version using try/except for type coercion. Claude also included comprehensive test cases with expected output, making the response more immediately useful.

GPT-4o's fix combined type and value checking into a single ValueError, which is simpler but less Pythonic. Its explanation was clear but lacked test cases.

Claude wins on completeness (two approaches, test cases, expected output) and Python best practices (separate exception types).
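The n=1 bug both models flagged is easy to verify by running the prompt's function (expanded to normal layout) next to the shared shape of both fixes:

```python
def fibonacci_buggy(n):
    # Original code from the prompt
    if n <= 0:
        return []
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib

def fibonacci_fixed(n):
    # Common core of both models' fixes: validate, then handle n == 0 and n == 1 explicitly
    if not isinstance(n, int) or n < 0:
        raise ValueError("Input must be a non-negative integer")
    if n == 0:
        return []
    if n == 1:
        return [0]
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib

print(fibonacci_buggy(1))   # [0, 1]  <- bug: two elements for n = 1
print(fibonacci_fixed(1))   # [0]
print(fibonacci_fixed(8))   # [0, 1, 1, 2, 3, 5, 8, 13]
```

For n = 1 the buggy version seeds `fib = [0, 1]` and the loop `range(2, 1)` never runs, so the two-element seed is returned unchanged.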

Run your own battle

Compare Claude Opus 4.6, GPT-4o and more AI models side-by-side with any prompt — free.

Try NailedIt.ai →