A Cube for April Christmas
On Scarcity, Abundance, and the Struggles Worth Having
I come from a place where the world feels much smaller than it actually is. Even though I grew up in the post-internet era, in my corner of the globe, the digital age felt more like a rumor than a reality.
A few years back, something curious happened: our community received donations through the Samaritan's Purse Operation Christmas Child drive. I only recently found out it was meant for Christmas—which, in hindsight, explains the red-and-green theme. Our "Christmas" arrived on a random day in April, probably thanks to the miracle of global logistics.
Each child got a shoebox filled with an odd mix of items: socks, cheap plastic toys, a coloring book or two. But one item showed up again and again, almost like a recurring gag—the Rubik's Cube. To us, it wasn't a festive gift; it was a diabolical block of colorful confusion. It practically dared us to understand it, and none of us had the faintest clue where to even begin.
This was Kadoma, the 10th-largest city in Zimbabwe, yet still light-years away from the kind of access that made such a puzzle solvable. No public library. No home internet. In fact, back then, only around 11–14% of the country had even occasional access to the web, mostly in the capital or major towns. For most of us, YouTube tutorials or online guides might as well have been science fiction.
But I was a bored teenager, and I had one lifeline: an old laptop with offline encyclopedias. One day, I typed "Rubik's cube" into the search bar, hoping for a miracle.
What I got was... not quite that. The page that loaded was dense and cryptic. No illustrations. No step-by-step breakdowns. Just a strange set of lettered moves and pattern sequences. At the time, I didn't even know the word "algorithm." But I copied it all into my notebook, writing sections out twice when I didn't understand them, and started twisting the cube, day after day.
For six months, I fumbled in the dark, trying to make sense of the logic. I wasn't following instructions so much as reverse-engineering them. And then one day, it happened. I solved it.
I remember holding that completed cube like it was some kind of trophy. I had cracked it. Through persistence, trial, and a lot of scribbled notes. It felt like I had proven something—not just to others, but to myself. That I could solve problems. That I could teach myself something hard.
Word spread. Kids would hand me their cubes on the street and ask me to "fix" them. Sometimes I had to think hard or refer back to my notes. But eventually, I could do it from memory. For a little while, the cube became something more than a toy. It became a symbol—of intelligence, of effort, of possibility in a place where those things weren't always rewarded.
Then one day, someone told me they had seen a video online of a guy solving the cube blindfolded in ten seconds. I watched it. And I never solved a cube again.
Now, I realize that cube wasn't just a puzzle—it was a lesson in how constraints shape thinking.
Scarcity vs. Abundance: The Two Faces of Creativity
That Rubik's Cube moment was my first real encounter with standardized problem-solving. In hindsight, it taught me something I only really started to understand as AI tools entered the picture: how the context around a problem changes the nature of thinking it demands.
Psychological research backs this up. In environments of scarcity, creativity often thrives. Constraints force you to improvise, make do, and invent. When resources are limited, people repurpose and rethink in surprisingly clever ways. (Study on scarcity and cognition).
By contrast, too much abundance—too many answers, tutorials, and examples—can sometimes dampen original thought. When every problem already has a polished, crowd-sourced solution, the mental itch to figure things out for yourself can fade. Why think deeply when the internet can do it faster? (Research on the "Google effect").
Yet, not all abundance is bad. Studies show that when used thoughtfully, resource-rich environments can enhance learning—especially when beginners are given worked examples to reduce cognitive load. In other words: shortcuts aren't the enemy. Mindless shortcuts are. (Worked examples research).
This brings me to a broader question: When should we rely on existing solutions, and when should we start from scratch?
First Principles vs. Ready-Made Solutions
Another way to frame this is through first-principles thinking versus ready-made solutions.
First-principles thinking breaks problems into fundamental truths and builds upward. It's slower, harder—but yields deeper understanding. In contrast, ready-made solutions—library functions, online answers, or AI-generated replies—are efficient. They help you skip the boring parts. And sometimes, that's exactly what you need.
But there's a cost. If you rely too much on shortcuts, you risk never learning the basics. Like solving a Rubik's Cube by following a video move-for-move without ever understanding why the moves work. You've replicated success, but you haven't earned it.
Sometimes, ready-made solutions—like using a library in programming or asking AI for a summary—are the practical choice. They let us offload routine tasks and focus on what matters. The key is knowing when to offload, and when to roll up your sleeves and do the work.
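In programming terms, the trade-off might look something like this (a toy sketch of my own, not drawn from any particular tutorial): you can call a library function and move on, or rebuild the same answer from its definition and understand every step along the way.

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Ready-made: one call, instant answer, no insight required.
ready_made = statistics.pstdev(data)

# First principles: rebuild the population standard deviation
# from its definition, one step at a time.
mean = sum(data) / len(data)                      # the average
squared_diffs = [(x - mean) ** 2 for x in data]   # spread around the mean
variance = sum(squared_diffs) / len(data)         # average squared spread
from_scratch = variance ** 0.5                    # back to the original units

# Both routes land on the same number; only one teaches you why.
assert abs(ready_made - from_scratch) < 1e-9
```

Neither route is "correct" on its own. The library call is what you want in production; the hand-built version is what you want while learning, because each line forces you to confront what the number actually means.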
Blend both: borrow when it saves time but dig deep when understanding matters. The goal isn't to avoid help—it's to make sure the help doesn't replace thinking.
AI, Cognitive Offloading, and the Risk of Losing the Thread
Today, tools like ChatGPT and GitHub Copilot offer real cognitive offloading. They automate the grunt work and accelerate progress. In many ways, they're like the Rubik's Cube solution I never had—except now, they're available instantly, and for everything.
Used wisely, they enhance creativity. They let us stay "in flow," avoid burnout, and focus on the big ideas. But overuse can lead to something worse than laziness—it can lead to shallow thinking. The more we rely on AI to think for us, the less we practice thinking for ourselves.
That's the paradox. AI can deepen our work—or it can dull our edge. The difference lies in intentionality.
How to Keep Thinking in an Instant-Answer Age
This new landscape demands intentionality. Knowing when to offload and when to engage is a critical skill. Sometimes you want speed. Other times, you need to slow down and chew on a hard problem until it yields something deeper. Imposing constraints—on time, resources, or access—can feel frustrating. But it can also foster innovation.
For learners, that might mean resisting the urge to look up every answer and instead wrestling with it first. For professionals, it means treating AI outputs as drafts to be challenged, not truths to be accepted.
Ultimately, both scarcity and abundance have value. But neither is enough by itself. What matters is how we use them.
When I solved that Rubik's Cube, it wasn't just because I had the algorithm—but because I didn't have everything else. No videos. No diagrams. No shortcut. Just a poorly formatted web page, a secondhand notebook, and an itch to understand.
Scarcity forced me to become resourceful. It taught me that solving problems involves more than just finding answers. It's about staying curious—even when the solution is already out there.
I never solved the cube again. Not because I'm bitter or defeated, but because, for me, the problem is already solved. It shifted from a meaningful challenge into a pure optimization game. I was never interested in being the fastest; I was interested in the journey of solving something from scratch, and I have already made that journey.
In a world full of answers, the real challenge isn't finding solutions—it's choosing which problems demand our own minds. So next time you reach for ChatGPT, ask yourself: Should I let it think for me, or should I think for myself?