Thu Apr 30 2026
And then it clicked
Guillaume Godet-Bar
The year was 2012.
As a Java developer at the time, I spent most of my day job designing and implementing DSL grammars, code editors, analyzers and compilers for System-on-Chip (SoC) designs. Multithreaded work dispatch, data indexing and processing, lock-free algorithms, and performance optimization in general were my bread and butter.
In the evening, binge-watching InfoQ presentations was an efficient way of discovering optimization techniques, as well as interesting uses of languages I was not familiar with. As I was regularly tasked with R&D work, surveying languages and their features was an implicit but essential part of my job. And so I had dived into Ruby, Erlang, Clojure, Groovy, Swift, to name a few, inspecting what was getting the attention of the wider developer community, analyzing paradigms that could one day make their way into a DSL design.
On one of those nights I remember seeing something on a slide like "owned variable: let x:T = ..., borrowed variable: let x:&T = ... and boxed variable: let x:~T = ..." [1] and thinking "now that's something interesting".
That was my first contact with Rust. I read a few blog posts, ran some of the available tutorials, and came away with a strong intuition that Rust had a bright future as an alternative to C and C++ for building robust and high-performance applications [2]. Indeed, it seemed like most of the coding and debugging issues I was dealing with on a daily basis in Java had concise, efficient solutions in Rust. The tedious operations on collections could simply be replaced with higher-order functions with no execution cost. The focus on the memory layout of running code that drove the ownership model would greatly help reduce the footprint of my algorithms. The list goes on.[3]
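To make that claim concrete, here is a minimal sketch of the kind of collection code that was tedious in Java 6 but collapses into an iterator chain in Rust, one the compiler optimizes down to the equivalent of a hand-written loop (the function and data are illustrative, not taken from my actual project):

```rust
// Sum the squares of the even numbers in a slice.
// The filter/map/sum chain is a set of zero-cost iterator adapters:
// no intermediate collections are allocated.
fn sum_even_squares(values: &[i64]) -> i64 {
    values
        .iter()
        .filter(|&&v| v % 2 == 0)
        .map(|&v| v * v)
        .sum()
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    // Evens are 2, 4, 6; their squares sum to 56.
    println!("{}", sum_even_squares(&data));
}
```

In Java 6 the same logic meant a mutable accumulator and an explicit loop; streams would not arrive until Java 8.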
Putting Rust to more serious work became an obsession, though I knew that I would only be able to test it on a low-stakes, personal project.
This is the story of how Rust eventually imposed itself as the answer to a thorny cross-platform packaging and deployment problem — and of how, despite my early conviction, I resisted the language for months before I finally understood the power of its paradigm.
On putting theory to practice
Finding a personal project came easily enough. For a few months I had been investigating ways of simplifying the process of writing scenarios – mostly for TTRPGs – by relying on a custom Markdown extension. A scanning phase over the Markdown parse tree would create a graph of places, events and characters, and the links between them. This graph would then be dumped to JSON, to be used in a companion iPad application – interactive graphs to browse places, events and characters – or consolidated into a PDF. I had a working early prototype in Ruby [4]. This was fine for CLI-based work, a deliberate design choice at the time, but fell short when I considered processing the Markdown data directly in the iPad application. Crossing the gap to iOS would have required a full rewrite in C with cross-platform support, which I was very much not looking forward to.
Enter cargo and rustup's predecessor, multirust. Multiple toolchains coexisting and managed by a single tool. Single-line compilation to arm or x86_64 targets.
FFI integration at the cost of writing one .h file and adding #[no_mangle] and pub extern "C" to the signature of function declarations called by the host application. For someone used to Maven build pipelines and Xcode project configurations, it felt too simple, as if there was no way this could work just like that. It's a real testament to the talent and dedication of the members of the Rust community who built such well-designed tooling.
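For readers who haven't crossed that bridge, the pattern looks roughly like this. The function name and the heading-counting body are hypothetical stand-ins for the real parser; only the `#[no_mangle]` / `extern "C"` mechanics are the point:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Exposed to the host (e.g. an iOS app) over the C ABI.
// `#[no_mangle]` keeps the symbol name stable for the linker;
// `extern "C"` fixes the calling convention. The matching .h file
// would declare: int32_t parse_scene_count(const char *markdown);
#[no_mangle]
pub extern "C" fn parse_scene_count(markdown: *const c_char) -> i32 {
    // Safety contract: the host passes a valid, NUL-terminated string.
    let text = unsafe { CStr::from_ptr(markdown) }.to_string_lossy();
    // Hypothetical stand-in for the real parsing work:
    // count top-level "# " headings as scenes.
    text.lines().filter(|line| line.starts_with("# ")).count() as i32
}
```

Compile it as a `staticlib` or `cdylib` for the target triple, link it into the host project, and the two sides talk through that one declaration.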
Implementing a proof of concept for a cross-platform pipeline took far less time than writing this blog post. I told myself that following up with the actual Markdown extension parser code would surely be no more than a mere formality.
It wasn't.
After a few informal brushes with it in tutorials and toy code, this was my first proper run-in with the Rust borrow checker.
And, oh, how we fought, how I struggled, and yet refused to read between the lines: there's a proper, idiomatic way of architecting your Rust code that won't easily tolerate approaches shoehorned in from other languages.
And there I was back then, as I can still see beginners do today, building heavy struct objects hauling references to any other object they might need, sprinkling Rcs and RefCells all over the place. Forcing object-oriented patterns that are not fit for Rust.
As I was building a graph that went through a couple of phases of data aggregation and transformation, readers may very well imagine the bloated mess it became.
I was seriously considering dumping the project as a failed experiment. And then it clicked. The borrow checker wasn't punishing me for no reason. Instead of struggling endlessly against it, I had the choice of finally understanding its point: my code was working against the strengths of the language. My graph nodes didn't need to keep Rc'ed references to nodes from previous iterations; destructuring into primitive type indices and hash maps was a simpler, sounder, more... Rusty way. And a few iterations later the code started to breathe.
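A minimal sketch of that restructuring, with node names invented for illustration: instead of nodes owning `Rc<RefCell<...>>` handles to one another, they live in a flat `Vec` and refer to each other by plain index, with a hash map for lookup by name.

```rust
use std::collections::HashMap;

// A node refers to its neighbors by index into Graph::nodes,
// not by shared-ownership pointers.
struct Node {
    name: String,
    links: Vec<usize>,
}

struct Graph {
    nodes: Vec<Node>,
    by_name: HashMap<String, usize>,
}

impl Graph {
    fn new() -> Self {
        Graph { nodes: Vec::new(), by_name: HashMap::new() }
    }

    fn add(&mut self, name: &str) -> usize {
        let id = self.nodes.len();
        self.nodes.push(Node { name: name.to_string(), links: Vec::new() });
        self.by_name.insert(name.to_string(), id);
        id
    }

    fn link(&mut self, from: usize, to: usize) {
        self.nodes[from].links.push(to);
    }
}

fn main() {
    let mut g = Graph::new();
    let tavern = g.add("tavern");
    let ambush = g.add("ambush");
    g.link(tavern, ambush);
    // No Rc, no RefCell, no borrow-checker fights: ownership is flat,
    // and indices stay valid as long as nodes are never removed.
    println!("{}", g.nodes[g.by_name["ambush"]].name);
}
```

The trade-off is that indices, unlike references, carry no lifetime guarantees: deleting a node invalidates them silently. For an append-only graph built in a few transformation passes, that trade is an easy one.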
Elementary, my dear Watson
This aha moment is, I believe, a true rite of passage for most Rust developers, even today. It is also a mental framing at the heart of writing idiomatic Rust code: an understanding that the borrow checker is a precious ally on your path to mastering Rust. It relentlessly teaches you how Rust wants to be written.
As a mix of object-oriented patterns and functional programming idioms, Rust is certainly powerful, but it rewards patience and an open mind. The mental model it teaches – for good memory management and layout – is a valuable asset that transfers to any other language.
Since my early experience, the Rust borrow checker – and clippy for that matter – has only become better, cleaner, its diagnostics more actionable. It was genuinely helpful when I finally started using Rust in professional contexts, circa 2019. Long before AI code assistants entered the picture, the Rust tooling was quietly helping its community gain confidence in writing proper Rust code. That tooling, and the discipline it builds, is part of why I keep reaching for Rust to build systems that cannot compromise on robustness, maintainability and performance.
Just as Rome wasn't built in a day, Rust didn't reach syntactic maturity until quite a few years after its inception in 2006. At the time boxed variables were indeed considered a core part of the language, before the feature was moved to the standard library in 2014 (through this PR). ↩︎
A domain where a language like Ada already had a proven record (and went even further) but failed to be widely adopted in the industry. ↩︎
Readers familiar with Java would rightfully argue that these are now features of the language, but again, this was 2012 and I was stuck with Java 6. ↩︎
An upcoming post on this blog will elaborate on why I think prototyping in a go-to scripting language is a bad idea. ↩︎