"AI is only as smart as you"
Over the past year, I've been using AI editors, moving from Cursor to Codex and Claude, and now just Claude. Through this journey, I noticed something important: they’re only as smart as the person using them.
An AI can generate code, suggest solutions, and explain concepts, but it can’t tell you if those suggestions are actually good for your specific context. It doesn’t know your constraints, your architecture, or the subtle tradeoffs that matter in your project. Without that judgment, you’re just accepting whatever it gives you.
This realization changed how I approach learning. If I want to actually benefit from AI tools instead of becoming dependent on them, I need to be able to evaluate their output critically. That means understanding the fundamentals deeply enough to catch mistakes, recognize when a “correct” solution is actually wrong for my use case, and know when to push back. Pushing back on AI suggestions and articulating why is something I struggled with on the job, but it’s become an essential skill.
As a result, I’ve started reading more: diving into different codebases, going through all of the React Native and Expo docs, exploring concepts like system design and caching more thoroughly, and building a stronger foundation. Not to compete with AI, but to work with it effectively. The goal isn’t to know more than the AI, it’s to know enough to guide it, verify it, and ultimately make better decisions about what to build and how to build it.
I’ll give AI credit for one thing, though: it’s pushed me to grow and learn outside my comfort zone in a supportive, non-judgmental environment.