October 4th, 2024

Magic Isn't Real

The article reflects on a software developer's journey from confusion to clarity in programming, emphasizing the importance of experience, continuous learning, and tackling challenges to gain deeper understanding.

The article discusses the journey of a software developer who reflects on the initial feelings of confusion and intimidation when learning programming concepts. The author compares this experience to the feeling of standing before a complex building, unsure how the tools at hand could create such structures. Over time, as developers gain experience and knowledge, they begin to recognize patterns and demystify the "magic" behind programming. The author emphasizes that many aspects of programming that once seemed daunting become clearer with context and understanding. The piece highlights the importance of continuous learning and exploring different areas of technology, even those not directly related to one's daily work. The author shares a personal experience of implementing a feature in Go, illustrating how the process of tackling challenges can lead to valuable insights, even if the outcome is not immediately useful. Ultimately, the message is that the perceived complexity in programming is often a lack of context, and with persistence and curiosity, developers can gain a deeper understanding of their craft.

- The feeling of confusion in programming is common among beginners.

- Gaining experience helps demystify complex programming concepts.

- Continuous learning and exploration are crucial for developers.

- Tackling challenges can lead to valuable insights and understanding.

- The perceived complexity often stems from a lack of context.

19 comments
By @tehmillhouse - 4 months
When understanding a new "magic", there's this beautiful moment when you grok it, and the abstraction poofs away.

It's when you take apart a mechanical clock and keep looking for the time-keeping part, until you figure out that there isn't a time-keeping part in there, it's just gears and a spring.

It's when you learn about integrated circuits and full-adders, and keep trying to understand how a bunch of transistors can do Mathematics, until you figure out that there isn't a mathematics-doing part in there, it's just circuits and wires, arranged in a way that makes the voltages come out right.

It's when your understanding of the top-down structure snaps together with the bottom-up mechanics of the building blocks. There's no space left for the ghost in the machine to haunt, and you go "Oh. huh". I live for that moment.
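
A minimal Go sketch of that full-adder moment (the names here are illustrative, not from the comment): model the gates as boolean functions, compose them into a full adder, and chain four into a ripple-carry adder. There is no mathematics-doing part anywhere in it, just composition.

    package main

    import "fmt"

    // The only primitives: three gates modeled over bools.
    func and(a, b bool) bool { return a && b }
    func or(a, b bool) bool  { return a || b }
    func xor(a, b bool) bool { return a != b }

    // fullAdder combines two input bits and a carry-in into a
    // sum bit and a carry-out, exactly as the gate diagram does.
    func fullAdder(a, b, cin bool) (sum, cout bool) {
        sum = xor(xor(a, b), cin)
        cout = or(and(a, b), and(cin, xor(a, b)))
        return
    }

    func main() {
        // Ripple-carry: chain full adders to add two 4-bit numbers.
        x := [4]bool{true, false, true, false} // 5 (0b0101), least significant bit first
        y := [4]bool{true, true, false, false} // 3 (0b0011)
        carry := false
        for i := 0; i < 4; i++ {
            var s bool
            s, carry = fullAdder(x[i], y[i], carry)
            fmt.Printf("bit %d: %v\n", i, s)
        }
        fmt.Println("carry out:", carry) // sum bits 0,0,0,1 = 0b1000 = 8
    }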

By @wruza - 4 months
One interesting thing about (accidentally) starting with assembly is that you mostly can't see magic at all; instead you see magicians-in-training explaining computers in all sorts of funny ways.

My first “PC” was a ZX Spectrum clone, and all I had was the built-in BASIC and then an assembler on a cassette. Both came with “books” on how to use them, together with all the unlimited time you have when you’re a kid.

This transferred to my first PC, and eventually I learned how FAT, DOS, and the BIOS work, how to make a TSR and fool around with B8000/A0000, and took first steps with the 386+. It also helped that my granddad was a pulse-electronics engineer who taught me how actual gates work and how computers count, sum, and select numbers. He also had access to multiple books on hardware. I knew it all down to the silicon.

Other people had all sorts of magical ideas on how computers work. Special “hidden system areas”, “graphics card does X”, “computers multiply by addition”, etc etc. It’s a human thing that if we don’t understand something, our mind tries to yadda yadda it.

And the more you yadda yadda, the less chances it leaves that you’ll actually learn it. I tend to fight with these half-baked autogenerated explanations and try to dig down to how it really works. For no particular reason, that’s just what I like to do. It leaves a mark on how you work though.

By @highfrequency - 4 months
> I’m sure I’m not alone, in that each time you pull the curtain off a piece of ‘magic’, you have the same thought: Oooooh yeah. I mean, well duh.. how else would you do that? I can't believe I couldn't see it.

This is the great paradox of good ideas. The best ideas are obvious, but only in retrospect. You would be very unlikely to encounter the idea by randomly fumbling around, yet it feels so simple and obvious (and often even easy to implement!) after someone else points it out and you mull it over. Usually it requires looking in a different dimension than the one you were focused on rather than looking further along familiar dimensions, which is our default behavior.

In this way there’s a parallel to modern machine learning, where backprop on gigantic models allows us to find very short paths in very high dimensions, rather than finding very long circuitous paths in low dimensions. It turns out this also solves the overfitting problem, in the same way that “retrospectively obvious” is a good filter for ideas.

By @shermantanktop - 4 months
So many frameworks and dev-oriented features are sold with the claim that domain X is super complicated and full of esoteric, useless trivia, so someone has put a clean facade over it that simplifies everything and protects the hapless dev from having to learn it. Which is nice: it’s a tidy abstraction, we all have other things to do, now we go fast.

Except…the dev has to learn the new API, which we can call domain Y, and which can become quite complicated and hard to map conceptually to domain X (e.g. React events vs browser rendering, or Java GC vs real memory).

And when the cool facade doesn’t quite work for some exotic use case, now what? The dev gets to learn the original domain X anyway, plus how X and Y interact. In the worst case they have to rewrite all their Y-using code.

Great abstractions are good magic; bad abstractions are evil magic. And yet good vs evil is often hard to tell apart when you pick up the problem.

By @mistercow - 4 months
Sometimes a remnant of the magic remains in the math, though, even after you understand why the math must work. The Burrows-Wheeler transform used in bzip2 is an example of this for me. I get why it works, but it just feels too damned convenient, almost as if this is one of the universe’s APIs, and you don’t get to look into the source code of this one to find out why such a simple maneuver is so effective on exactly the sort of data we like to compress.
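
For the curious, the transform itself fits in a few lines. A naive Go sketch (real implementations use suffix arrays instead of materializing every rotation, and conventions for the end marker vary): sort all rotations of the input and read off the last column. Repetitive input comes out clustered, which is what the run-length and move-to-front stages of bzip2 exploit downstream.

    package main

    import (
        "fmt"
        "sort"
    )

    // bwt computes the Burrows-Wheeler transform the naive way:
    // sort all rotations of the input, then read off the last
    // column. '$' marks the end of the input.
    func bwt(s string) string {
        s += "$"
        n := len(s)
        rotations := make([]string, n)
        for i := 0; i < n; i++ {
            rotations[i] = s[i:] + s[:i]
        }
        sort.Strings(rotations)
        last := make([]byte, n)
        for i, r := range rotations {
            last[i] = r[n-1]
        }
        return string(last)
    }

    func main() {
        fmt.Println(bwt("banana")) // annb$aa: the repeated letters cluster
        fmt.Println(bwt("abracadabra"))
    }
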
By @anymouse123456 - 4 months
TFW the real wizards are the ones working in the strata just below the one I understand...

After nearly 30 years of persistent excavation, I'm finally at bedrock with registers, some assembly and embedded C.

Lifting my head up to find the mess we've created for graphical application development is admittedly disheartening, but also thrilling as there are ways to make it better.

By @mst - 4 months
Corollary: Sufficiently well encapsulated magic is indistinguishable from technology.
By @bananaflag - 4 months
The only two examples of real "magic" I've encountered (would be interested in more):

1) You cannot do preemptive multitasking except by having a timer interrupt (okay, maybe one can also allow emulation).

2) Quantum key distribution (starting with BB84) depends crucially on the fact that the world is not classical.

But in general I agree with the article, it's more or less why I did not become a programmer.

By @nuancebydefault - 4 months
You will indeed find the _magic_ to be no such thing when you dig deep down into the abstractions. Each part makes sense and is comprehensible.

However, the magic is what emerges when the parts come together: the system becomes more than the sum of its parts.

Think of a car, a cellphone, or an LLM: what it does feels magical, while the elementary parts contain no magic.

By @Joker_vD - 4 months
On the topic of how that comptime implementation in Go works: I've toyed with a similar idea for implementing constant folding/beta reduction: generate a temporary file with the relevant subset of definitions, insert the constant expression/function call you'd like to evaluate and... compile and run this file. It may not be the most performant thing to do, but at least you get the correct semantics without writing a separate constexpr interpreter.
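
A minimal Go sketch of that generate-compile-run trick (evalConst and the fib example are my own, not from the comment): write a throwaway program that prints the expression, run it with "go run", and capture the output. Slow, but the evaluation semantics are exactly the compiler's.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // evalConst "folds" a constant expression by generating a
    // throwaway Go program that prints it, then shelling out to
    // `go run` and capturing what it writes.
    func evalConst(defs, expr string) (string, error) {
        src := fmt.Sprintf(
            "package main\n\nimport \"fmt\"\n\n%s\n\nfunc main() { fmt.Print(%s) }\n",
            defs, expr)
        dir, err := os.MkdirTemp("", "constfold")
        if err != nil {
            return "", err
        }
        defer os.RemoveAll(dir)
        file := filepath.Join(dir, "main.go")
        if err := os.WriteFile(file, []byte(src), 0o644); err != nil {
            return "", err
        }
        out, err := exec.Command("go", "run", file).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Fold a function call at "comptime" using the real compiler.
        result, err := evalConst(
            "func fib(n int) int { if n < 2 { return n }; return fib(n-1) + fib(n-2) }",
            "fib(20)")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(result) // 6765
    }
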
By @GrantMoyer - 4 months
Sufficiently understood magic is by definition technology.
By @tayloramurphy - 4 months
Charles Petzold's "Code" was the book that revealed the magic behind computers for me. While I don't fully understand everything that happens inside a computer, there's a confidence that I could if I needed to. It made tough problems feel solvable in a way I hadn't felt in other disciplines.
By @gradientsrneat - 4 months
One of the cool things about being a kid is that I didn't know how tf anything worked. So I thought long and hard about how things work, just randomly guessing. When it turns out later that there's a clear answer, it's very satisfying. But knowledge can also weigh you down and discourage curiosity.
By @niemandhier - 4 months
Going the other direction, towards higher levels of abstraction, tends to strip away the magic too!

I recently wrote some robotics code using ROS. Taking a step back, I looked at the result and thought: actually, that is not much different conceptually from running K8s deployments coupled by Kafka.

By @dvh - 4 months
In a recent video about the reverse game of life, Alpha Phoenix hinted that SAT solvers are magic: https://youtu.be/g8pjrVbdafY
By @zaphar - 4 months
Magic isn't real. But unnecessary magic can be annoying, and sometimes it doesn't pull its weight.
By @metalman - 4 months
Interesting how a clickbait title works just like magic, and even more so gets people debating a so-so premise in mystical terms.
By @abdellah123 - 4 months
Birds aren't real