I like Odin and hope for it to gain more momentum.
I have an important metric for new "systems" languages: does the language allow NULL for its commonly used pointer type? Rust by default does not (references may _not_ be NULL). Here is where I think Odin makes a mistake.
In the linked blog post Odin mentions ^os.File, which means pointer to File (somewhat like FILE* in C). Theoretically the pointer can be NULL. In practice, maybe I would need to check for NULL or maybe I would not (I would have to scan the Odin function's documentation to see what the contract is).
In Rust, if a function returns &File or &mut File or Box<File> etc. I know that it will never be NULL.
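A minimal Rust sketch of that guarantee (the function names here are made up for illustration): absence has to be encoded in the type, and a bare reference is always valid.

    use std::fs::File;

    // "Might not be there" must be spelled out as Option; a plain &File
    // can never be null.
    fn open_log() -> Option<File> {
        File::open("app.log").ok() // absence is explicit in the return type
    }

    fn log_size(f: &File) -> u64 {
        // `f` is guaranteed valid here; no null check is possible or needed.
        f.metadata().map(|m| m.len()).unwrap_or(0)
    }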
So Odin repeats the famous "billion-dollar mistake" (Tony Hoare). Zig in comparison is a bit more fussy about NULL in pointers, so it wins my vote.
Currently this is my biggest complaint about Odin. While Odin packs a lot of powerful programming idioms (and feels a bit higher-level and more ergonomic than Zig), it makes the same mistake that Golang, C and others make in allowing NULL in the default pointer type.
One thing I think worth considering for systems languages on this point: if you don't want to solve every expressiveness issue downstream of Result/Option/etc from the outset, look at Swift, which has nullable types.
MyObject can't be null. MyObject? can be null. Handling nullability as a special thing might help with the billion-dollar mistake without generating pressure to have a fully fleshed out ADT solution and everything downstream of that.
To people who would dismiss ADTs as a hard problem in terms of ergonomics: Rust makes it less miserable thanks to things like the question-mark shorthand and a bazillion trait methods. Languages like Haskell solve it with monads + do syntax + operator overloading galore. Languages like Scala _don't_ solve it for Result/Option in any fun way and are thus miserable on this point, IMHO.
I like to think about how many problems a feature solves to judge whether it's "worth it". I believe that sum types solve enough different problems that they're worth it, whereas nullability solves only one problem (the C-style or Java-style null object). Sum types can solve that with Option<T>, and also provide error handling with Result<T, Err> and control flow with ControlFlow<Continue, Break>, among others, so that's already a better deal.
Nullability is a good retro-fit, like Java's type erased generics, or the DSL technology to cram a reasonable short-distance network protocol onto the existing copper lines for telephones. But in the same way that you probably wouldn't start with type erased generics, or build a new city with copper telephone cables, nullability isn't worth it for a new language IMO.
- `?T` and `T!E` as type declaration syntax that desugars to `Option` and `Result`;
- and `.?` and `.!` operators so chains like `foo()?.bar()!.baz()` can be written and all the relevant possible return branches are inserted without a fuss.
Having `Option` and `Result` be simply normal types (and not special-casing "nullable") has benefits that are... obvious, I'd say. They're just _normal_. Not being special cases is great. Then, having syntactic sugars to make the very, _very_ common cases be easy to describe is just a huge win that makes correct typing more accessible to many more people by simply minimizing keystrokes.
The type declaration sugar is perhaps merely nice to have, but I think it really does change the way the average programmer is willing to write. The chaining operators, though... I would say I borderline can't live without those, anymore.
Chaining operators can change the SLOC count of some functions by as much as, say, 75%, if we consider a language like Go with its infamous "if err != nil" clause that is mandated to spread across three lines.
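A sketch of that difference in Rust (the "PORT" environment variable and the function names are made up for illustration):

    // Without chaining: every step spells out its own early-return branch.
    fn port_from_env() -> Option<u16> {
        let raw = match std::env::var("PORT").ok() {
            Some(s) => s,
            None => return None,
        };
        match raw.parse::<u16>().ok() {
            Some(p) => Some(p),
            None => None,
        }
    }

    // With `?`, both early-return branches are inserted for us.
    fn port_from_env_chained() -> Option<u16> {
        Some(std::env::var("PORT").ok()?.parse::<u16>().ok()?)
    }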
I mean, yeah, type erasure does give parametricity, but you can instead design your language so that you monomorphize but insist on parametricity anyway. If you write stable Rust, your implementations get monomorphized but you aren't allowed to specialize them: the stable language doesn't provide a way to write two distinct versions of the polymorphic function.
And if you only regard parametricity as valuable rather than essential then you can choose to relax that and say OK, you're allowed to specialize but if you do then you're no longer parametric and the resulting lovely consequences go away, leaving it to the programmers to decide whether parametricity is worth it here.
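A small Rust sketch of what that restriction means in practice: this function is monomorphized (one copy per instantiation), yet it is still parametric, because with no bounds on T the only values of type T it can return are the ones it was given, and stable Rust offers no way to special-case, say, T = String.

    // Parametric despite monomorphization: behavior is uniform across all T.
    fn pick_first<T>(a: T, _b: T) -> T {
        a
    }

    fn main() {
        assert_eq!(pick_first(1, 2), 1);
        assert_eq!(pick_first("x", "y"), "x");
    }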
I don't understand your first paragraph. Monomorphization and parametricity are not in conflict; the compiler has access to information that the language may hide from the programmer. As an existence proof, MLTon monomorphizes arrays while Standard ML is very definitely parametric: http://www.mlton.org/Features
I agree that maintaining parametricity or not is a design decision. However, recent languages that break it (e.g. Zig) don't seem to understand what they're doing in this regard. At least I've never seen a design justification for this, but I have seen criticism of their approach. Given that type classes and their ilk (implicit parameters; modular implicits) give the benefits of ad-hoc polymorphism while maintaining parametricity, and are well established enough to the point that Java is considering adding them (https://www.youtube.com/watch?v=Gz7Or9C0TpM), I don't see any compelling reason to drop parametricity.
My point was that you don't need to erase types to get parametricity. It may be that my terminology is faulty, and that in fact what Rust is doing here does constitute "erasing" the types. In that case, what describes the distinction between, say, a Rust function which is polymorphic over a function to be invoked, and a Rust function which merely takes a function pointer as a parameter and then invokes it? I would say the latter is type erased.
I personally don't enjoy the MyObject? typing, because it leads to edge cases where you'd like to have MyObject??, but it's indistinguishable from MyObject?.
E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.
It's still obviously way better than having all object types include the null value.
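This is easy to see in Rust, where the nesting is preserved (a minimal sketch):

    fn main() {
        let with_null: Vec<Option<i32>> = vec![None];
        let empty: Vec<Option<i32>> = vec![];

        // Found an element that happens to be "null":
        assert_eq!(with_null.first().copied(), Some(None));
        // Found nothing at all:
        assert_eq!(empty.first().copied(), None);
    }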
> E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.
This is a problem with the signature of the function in the first place. If it's something along the lines of `fn find(&self, pred: impl Fn(&T) -> bool) -> Option<T>` (a hypothetical Rust-flavored sketch), the _result_ is responsible for the return-value wrapping, so finding a null element yields `Some(None)` while finding nothing yields `None`. Making this not copy is a more advanced exercise that is bordering on impossible (safely) in C++, but Rust and newer languages have no excuse for it.
Kotlin, for example, has to work around exactly this in its standard library's Sequence.last(predicate): it tracks a separate found flag (plus an unchecked cast), because `last == null` alone couldn't distinguish "no match yet" from "matched a null element" when T is itself nullable.

    inline fun <T> Sequence<T>.last(predicate: (T) -> Boolean): T {
        var last: T? = null
        var found = false // `last == null` would be ambiguous for nullable T
        for (element in this) {
            if (predicate(element)) {
                last = element
                found = true
            }
        }
        if (!found) throw NoSuchElementException("Sequence contains no element matching the predicate.")
        @Suppress("UNCHECKED_CAST")
        return last as T
    }
A proper option type like Swift's or Rust's cleans up this function nicely.
Your example produces very distinguishable results: e.g., if Array.first finds a nil value it returns Optional<Type?>.some(.none), and if it doesn't find any value it returns Optional<Type?>.none.
The two are not equal, and only the second one evaluates to true when compared to a naked nil.
This is Swift, where Type? is syntax sugar for Optional<Type>. Swift's Optional is a standard sum type, with a lot of syntax sugar and compiler niceties to make common cases easier and nicer to work with.
Well, in a language with nullable reference types, you could use something like
    fn find<T>(self: List<T>) -> (T, bool)
to express what you want.
But exactly like Go's error handling via a (fake) unnamed tuple, it's very much error-prone (and the return value might contain absurd values like `(someInstanceOfT, false)`). So yeah, I also prefer a language w/ ADTs, which solve it via a sum type rather than being stuck with product types forever.
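The shape of the problem, sketched in Rust: a product type can represent states that should be impossible, while a sum type cannot.

    // Tuple-style: "not found" still has to invent a dummy i32.
    fn find_tuple(v: &[i32], target: i32) -> (i32, bool) {
        for &x in v {
            if x == target {
                return (x, true);
            }
        }
        (0, false) // absurd states like (42, false) are also representable
    }

    // Sum-type style: "absent" needs no dummy value at all.
    fn find_option(v: &[i32], target: i32) -> Option<i32> {
        v.iter().copied().find(|&x| x == target)
    }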
I like Go's approach of having a default value for everything, which for struct pointers is nil. I don't think I've ever cared about the difference between a null result and no result, as they're semantically the same thing (what I'm looking for doesn't exist).
The Scala solution is the same as Haskell. for comprehensions are the same thing as do notation. The future is probably effect systems, so writing direct style code instead of using monads.
It's interesting that effect system-ish ideas are in Zig and Odin as well. Odin has "context". There was a blog post saying it's basically for passing around a memory allocator (IIRC), which I think is a failure of imagination. Zig's new IO model is essentially pass around the IO implementation. Both capture some of the core ideas of effect systems, without the type system work that make effect systems extensible and more pleasant to use.
I think the notion that "null" is a billion-dollar mistake is well overblown. NULL/nil is just one of many invalid memory addresses, and in practice most invalid memory addresses are not NULL. This is related to the drunkard’s search principle (a drunk man looks for his lost keys at night under a lamppost because he can see in that area). I have found that null pointers are usually very easy to find and fix, especially since most platforms reserve the first page of (virtual) memory to check for these errors.
In theory, NULL is still a perfectly valid memory address; it is just that we have decided on the convention that NULL is useful for marking a pointer as unset.
Many languages (including Odin) now have support for maybe/option types or nullable types (monads), however I am still not a huge fan of them in practice as I rarely require them in systems-level programming. I know very well this is a "controversial" opinion, but systems-level programming languages deal with memory all the time and can easily get into "unsafe" states on purpose. Restricting this can actually make things like custom allocators very difficult to implement, along with other things.
n.b. Odin's `Maybe(^T)` is identical to `Option<&T>` in Rust in terms of semantics and optimizations.
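That identity is observable: the "none" case is stored in the bit pattern the non-null pointer can never have, so the wrapper is free. A quick Rust check of the niche optimization:

    use std::mem::size_of;

    fn main() {
        // Option<&T> is pointer-sized: None occupies the forbidden null value.
        assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());
        assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());
    }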
see, this seems like something that's nice to actually put into the types; a Ptr<Foo> is a real pointer that all the normal optimizations can be done to, but cannot be null or otherwise invalid, and UnsafePtr makes the compiler keep its distance and allows whatever tricks you want.
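Rust splits things roughly along those lines, for comparison: NonNull<T> is a raw pointer the type system promises is never null (so the niche optimization still applies), while *mut T is the "keep your distance" escape hatch.

    use std::{mem::size_of, ptr::NonNull};

    fn main() {
        let mut x = 42u32;
        // NonNull refuses to be constructed from a null pointer.
        let p: NonNull<u32> = NonNull::new(&mut x as *mut u32).expect("non-null");
        assert_eq!(size_of::<Option<NonNull<u32>>>(), size_of::<*mut u32>());

        let raw: *mut u32 = p.as_ptr(); // drop down when you need the tricks
        unsafe { *raw += 1 };
        assert_eq!(x, 43);
    }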
Odin’s design is informed by simplicity, performance and joy and I hope it stays that way. Maybe it needs to stay a niche language under one person’s control in order to do so since many people can’t help but try to substitute their own values when touring through a different language.
I wonder if someday we'll look back differently on the "billion-dollar mistake" thing. The key problem with null references is that any given reference forces you either to check whether it's null, if you don't already know, or to have a contract that it can't be null. Not having null references really does solve that problem, but in everyday programs you still often wind up with situations where you can know from the outside that some function will return a non-empty value, while the function itself is unable to make that guarantee in a way the compiler can enforce; in those cases, you have no choice but to face the same dilemma. In Rust this situation plays out with `unwrap()`, which most reasonably-sized codebases end up with some of in practice. You could always forbid it, but that is only somewhat of an improvement, because in a lot of cases there also isn't anything logical to do once that invariant hasn't held. (Though for critical production workloads, it is probably a good idea to try to find something else to do other than let the program entirely crash in this event, even if it's still potentially an emergency.)
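A small Rust illustration of that dilemma (the names are invented): the caller "knows" the invariant, but the type system can't see it, so `expect` reintroduces a deliberate crash point.

    fn main() {
        let config_keys = vec!["host", "port"]; // we "know" this is never empty
        // first() must still return Option; the knowledge lives outside the types.
        let first = config_keys.first().expect("config_keys is never empty");
        println!("{first}");
    }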
In other words: after all this time, I feel that Tony Hoare framing null references as the billion-dollar mistake may be overselling it at least a little. Making references non-nullable by default is an improvement, but the same problem still plays out as long as you ever have a situation where the type system is insufficient to guarantee the presence of a value you "know" must be there. (And even with formal specifications/proofs, I am not sure we'll ever get to the point where it is always feasible to prove.) The only real question is how much of the problem is solved by not having null references, and I think it's less than people acknowledge.
(edit: Of course, it might actually be possible to quantify this, but I wasn't able to find publicly-available data. If any organization were positioned to be able to, I reckon it would probably be Uber, since they've developed and deployed both NullAway (Java) and NilAway (Go). But sadly, I don't think they've actually published any information on the number of NPEs/panics before and after. My guess is that it's split: it probably did help some services significantly reduce production issues, but I bet it's even better at preventing the kinds of bugs that are likely to get caught pre-production even earlier.)
I think Hoare is bang on, because we know that similar values in many languages are also problematic even though they're not related to memory.
The NaNs are, as their name indicates, not numbers. So the fact that this 32-bit floating-point value parameter might be NaN, which isn't even a number, is as unhelpful as finding that the Goose you were passed as a parameter is null (i.e. not actually a Goose at all).
There's a good chance you've run into at least one bug where oops, that's NaN and now the NaN has spread and everything is ruined.
The IEEE NaNs are baked into the hardware everybody uses, so we'll find it harder to break away from this situation than for the Billion Dollar Mistake, but it's clearly not a coincidence that this type problem occurs for other types, so I'd say Hoare was right on the money and that we're finally moving in the correct direction.
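A minimal Rust demonstration of the contagion described above:

    fn main() {
        let nan = f32::NAN;
        assert!((nan + 1.0).is_nan()); // it spreads through arithmetic
        assert!(nan != nan);           // and even breaks reflexivity of ==
        let mean = [1.0, nan, 3.0].iter().sum::<f32>() / 3.0;
        assert!(mean.is_nan());        // one bad value ruins the aggregate
    }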
What I'm saying is, I disagree that "we know" these things. We know that there are bugs that can be statically prevented by having non-nullable types to enforce contracts, but that doesn't in itself make null the actual source of the problem.
A language with non-nullability-by-default in its reference types is no worse than a language with no null. I say this because, again, there will always be situations where you may or may not have a value. For example, grabbing the first item in a list; the list may be empty. Even if you "know" the list contains at least one item, the compiler does not. Even if you check the invariant to ensure that it is true, the case where it is false may be too broken to handle and thus crashing really is the only reasonable thing to do. By the time the type system has reached its limits, you're already boned, as it can't statically prevent the problem. It doesn't matter if this is a nullable reference or if its an Option type.
Because of that, we're not really comparing languages that have null vs languages that don't. We're comparing languages that have references that can be non-nullable (or functionally equivalent: references that can't be null, but optional wrapper types) versus languages that have references that are always nullable. "Always nullable" is so plainly obviously worse that it doesn't warrant any further justification, but the question isn't whether or not it's worse, it's how much worse.
Maybe not a billion dollars worse after all.
P.S.: NaN is very much the same. It's easy to assign blame to NaN, and NaN can indeed cause problems that wouldn't have existed without it. However, if we had signalling NaNs by default everywhere, I strongly suspect that we would still curse NaN, possibly even worse. The problem isn't really NaN. It's the thing that makes NaN necessary to begin with. I'm not defending null as in trying to suggest that it isn't involved in causing problems, instead I'm suggesting that the reasons why we still use null are the true root issue. You really do fix some problems by killing null, but the true root issue still exists even after.
Odin offers a Maybe(T) type which might satisfy your itch. It's sort of a compromise. Odin uses multiple-returns with a boolean "ok" value for binary failure-detection. There is actually quite a lot of syntax support for these "optional-ok" situations in Odin, and that's plenty for me. I appreciate the simplicity of handling these things as plain values. I see an argument for moving some of this into the type-system (using Maybe) when it comes to package/API boundaries, but in practice I haven't chosen to use it in Odin.
Maybe(T) would be for my own internal code. I would need to wrap/unwrap Maybe at all interfaces with external code.
In my view, a huge value-add going from plain C to Zig/Rust has been eliminating the possibility of NULL in the default pointer type. Odin makes the same mistake Golang did. It's not excusable IMHO in such a new language.
Both Odin and Go have the "zero is default" choice. Every type must have a default and that's what zero signifies for that type. In practice some types shouldn't have such a default, so in these languages that zero state becomes a sentinel value - a value notionally of this type but in fact invalid, just like Hoare's NULL pointer, which means anywhere you didn't check for it, you mustn't assume you have a valid value of that type. Sometimes it is named "null" but even if not it's the same problem.
Even ignoring the practical consequences, this means the programmer probably doesn't understand what their code does, because there are unstated assumptions all over the codebase because their type system doesn't do a good job of writing down what was meant. Almost might as well use B (which doesn't have types).
I've been actively toying with Odin in the past few days. As a Gopher, the syntax is partially familiar. But as it is a lower-level language with manual-ish memory management, simple things require much more code to write and a ton of typecasting.

Lack of any kind of OOP-ism, like inheritance (bad), encapsulation (ok), or methods (no-brainer), is very spartan and unpleasant in 2025, but that's just a personal preference. I don't think I ever used a fully procedural language in my entire life. It requires a complete rewiring of one's brain, which I'd say is a huge obstacle for most programmers, definitely for the younger crowd. For low-level guys, this is quite a nice and powerful tool. For everyone else, it's a bit of a headache (even Zig has methods and interfaces).

The language still lacks basic things like SQL drivers, a solid HTTPS stack, websockets, and essentially anything related to web and networking (which has only bare-bones functionality). As a Gopher, I am biased, but the web rules the world, so it is an objective complaint.

In the end, this is a solid language with great support for 2D and 3D graphics and advanced mathematics, which naturally makes it a very niche language for making games or anything to do with visual programming. Definitely try it out.
PS: I just read a funny comment on YT video: "Odin feels like a DSL for writing games masquerading as a systems language."
This gets you dynamic dispatch, roughly via the C++ route (inline vtables in implementing types). This means you always pay for it on the types which provide it, even if you rarely use the feature; removing those vtables makes it unavailable everywhere.
A lot of programmers these days want static dispatch for its ergonomic value and Odin doesn't help you there. Odin thinks we should suck it up and write alligator_lay_egg_on(gator, egg, location) not gator.lay_egg_on(egg, location)
If we decide we'd prefer to type gator->lay_egg_on(egg, location) then Odin charges us for a vtable in our Alligator type, which we didn't need or want, and then we incur a stall every time we call that because we need to go via the vtable.
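For contrast, a rough Rust sketch of the alternative design: the vtable travels in the fat pointer rather than sitting inline in the type, so you only pay for dynamic dispatch at the call sites that opt into it.

    trait LaysEggs {
        fn lay_egg_on(&self, spot: &str);
    }

    struct Alligator; // no vtable stored in the type itself

    impl LaysEggs for Alligator {
        fn lay_egg_on(&self, spot: &str) {
            println!("egg laid on {spot}");
        }
    }

    // Static dispatch: monomorphized, resolved at compile time.
    fn nest_static(g: &impl LaysEggs) {
        g.lay_egg_on("sand");
    }

    // Dynamic dispatch: &dyn is a fat pointer carrying the vtable.
    fn nest_dyn(g: &dyn LaysEggs) {
        g.lay_egg_on("sand");
    }

    fn main() {
        let gator = Alligator;
        nest_static(&gator);
        nest_dyn(&gator);
    }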
Oh, nice. I have to admit I'm not all that familiar with Odin, because I've been all-in on Zig for a long time. I've been meaning to try out a game dev project in Odin for a while though, but haven't had the time.
I am using Odin to write a video game... why would I want an HTTPS stack, SQL drivers, websockets or any of that? Maybe eventually I might need some websockets if I want multiplayer. But I can also just make bindings to a C library, so no real issue there.
> Is Odin “just” a language for game development?
> No. It is a common misconception that Odin is “just” for game development (“gamedev”) due to the numerous vendor packages that could be used in the aid of the development of a game. However, gamedev is pretty much the most wide domain possible where you will do virtually every area of programming possible.
>
> Odin is a general purpose language; is capable of being used in numerous different areas from application development, servers, graphics, games, kernels, CLI/TUIs, etc.
>
> There are many aspects of Odin which do make working with 2D and 3D related operations (which are common in gamedev) much nicer than other languages, especially Odin’s array programming, swizzling, #soa data types, quaternions and matrices, and so much more niceties which other languages do not offer out-of-the-box.
Ginger Bill vehemently rejects this notion and fights it in every podcast, trying to "sell" Odin as a general-purpose low-level language. But he is failing, because of my points above; your claim proves yet again that Odin has profiled itself as a language for games, when in reality that was never Bill's intention. There is nothing wrong with that; it's just the perception among programmers.
It's also in the Handmade crowd, and for a lot of people that's intimately connected to video games. I actually think Handmade's approach is helpful for games in a way it isn't for a lot of other software.
Games are art. The key thing is that you have to actually make it. Handmade encourages people who might make some art to actually make something rather than being daunted by the skill needed for a very sophisticated technology. Handmade is like telling a would-be photographer "You already have a camera on your phone. Point it at things and take pictures of them" rather than "Choose your subject, then you will need to purchase either an SLR or maybe a larger camera, and suitable lenses and a subscription for Photoshop and then take a college course in photo composition"
I don't want to use a text editor made by someone who has no idea what they're doing and learned about rope types last week. A dozen handmade text editors, most not as good as pico, are of no value to anybody.
But I do want to play video games by people who have no idea what they're doing. That's what Blue Prince is, for example. A dozen handmade video games means a dozen chances for an entirely unprecedented new game. It'll be rough around the edges, but novelty is worth a lot.
i'm kinda glad it's lacking typical webdev stuff at the moment. if nothing else, for developers' focus. it's absolutely excellent for game development. i have written 2 complete games in odin and am working on a third, all using just vendor raylib and absolutely flying. build time, language server, debug cycles. i complete entire features every session, very productive language. i look forward to its maturity
I think Odin should market itself for the aforementioned games and graphics. Otherwise it will become a very niche language. Even now, I think there are only about 5k Odin repositories on GitHub, even though it is essentially a complete language. Contrast it with Zig, which is still evolving and has breaking changes, being still at 0.x without a clear sight of 1.0, and yet it has over 27k repositories and big projects like Ghostty, Bun, or TigerBeetle written in it.
Once Jonathan Blow's Jai, the language that inspired the conception of both of these, comes out next year, Odin will likely have no chance competing on the marketing side of things with programmers, and will be overtaken by Jai, and to a large extent by Zig as well. So the future of the language might not be as solid as it might seem; it might end up just as an internal tool for JangaFX, which is how it originated.
Having the "web stuff" can attract literally millions of developers whom can elevate the language into more stable and broadly used language. More documentation would become available, libraries, youtube videos, internet presence in general.
Did Jon state that he intends to release it as open source? I am not sure he is the guy to go the stressful route. If at all, he would probably go with long release cycles without considering public feedback too much.
He also stated recently that he doesn't care too much about language design at the syntax level, or better said, it's not his top focus, as the overarching concepts are more important to him.
I think there is a chance that people may have a hard time adapting to the language. His strong focus on gamedev will further cut down the audience. It will certainly draw a lot of attention, but massive adoption is highly questionable.
He said that for some period, the compiler will not be open source, though the language will be. Whether the compiler will become open source or not, I am not sure. But I think he just wants to get the spec done beforehand. I think he mentioned that the compiler code might be purchased via a paid licence? I am not certain, but as I am not an AAA studio, I do not care either way.
tbf there was a general systems language renaissance circa 2015, following Mozilla sponsoring Rust in 2009 and its impending 1.0 in 2015. Jai, Zig, and Odin were all contemporaries in that wave.
The language is in closed beta; there aren't exhaustive details available. You can see interviews and some details on YT if you look up Jon(athan) Blow with suitable topics.
Popularity is not required for a language to be "successful". An argument can be made that popularity can bring fragmentation, a lack of focus, security and usability issues via poor quality code and outdated information, and overwhelm core developers. Just take a look at JavaScript and Python ecosystems. Whereas less popular languages like Nim, Zig, Odin, etc., thrive within their niches.
This is a delight to read. I've been doing a survey of languages over the last several days, and Odin is one of the more interesting ones... but looking at the OS and FS related parts of the standard library made me back away at high velocity. They seemed like litanies of generated code that simply describe every quirk of every platform: not an abstraction at all. And while I do want those levels to be _available_, I also don't want to be dragged down there in every program.
Delighted to see more work will be focused there in the future.
Anyone have a good comparison of Odin vs C/C++/Rust/Zig/Hare/etc? I'm particularly interested in how simple it is to implement a compiler in the given language.
I wrote a simple benchmark comparing my custom dynamic array implementation in Dlang with dynamic arrays in other languages. The test appends and then removes 100,000,000 integers on a Ryzen 3 2200G with 16 GB RAM. This is just a rough cross-language benchmark and not something serious:
    Appending and removing 100000000 items...

    Testing: ./app_d
    real 0.16
    user 0.03
    sys  0.12

    Testing: ./app_zig
    real 0.18
    user 0.05
    sys  0.13

    Testing: ./app_odin
    real 0.27
    user 0.10
    sys  0.16
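The sources aren't included above; as a rough idea of the shape of such a test, a Rust equivalent might look like this (a sketch, not the benchmark that produced those numbers):

    fn main() {
        const N: usize = 100_000_000;
        let mut v: Vec<i32> = Vec::new();
        for i in 0..N {
            v.push(i as i32); // amortized O(1) appends, occasional regrowth
        }
        while v.pop().is_some() {} // then remove everything again
        assert!(v.is_empty());
    }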
Odin is more similar to Zig or C (very similar to Zig, I think, but with less boilerplate) and more similar to Go syntactically. I have also found Odin to be a bit friendlier than Zig to get compiling and running, with less required safety. There are far fewer features, so I wouldn't really compare it to C++ or Rust. I am not familiar with Hare.
Don't have a comparison, but I've written toy languages in a few of them.
I only really feel confident talking about Rust. Rust stands out when it comes to parsing (functional bros unite), but does suffer when interpreting because of unsafe memory access, although for a compiler that shouldn't be an issue.
Odin is great, but I feel like it doesn't have enough syntax sugar to make languages easy to work on - that being said you can achieve most of what you want in a mostly comparable way if you're willing to write more ugly code. In this way it's very similar to C.
Odin has dynamic arrays, dynamic hash tables and generics so imho it’s far from C and closer to D and Go, except for not having GC. It occupies the space between D/Go and C I would say.
Odin claims to be pragmatic (what language doesn't lol) but "All procedures that returned allocated memory will require an explicit allocator to be passed". Charitably, is this aimed at c/zig heads?
I'm guessing it's aimed at game development, since Vulkan has a similar pattern in every function call (although optional there; the driver does its own allocation if you pass null).
As another commenter wrote "how do you allocate memory without an allocator?"
Even `malloc` has overhead.
> Wouldn't dynamic scope be better?
Dynamic scope would likely be heavier than what Odin has, since it'd require the language itself to keep track of this - and to an extent Odin does do this already with `context.allocator`, it just provides an escape hatch when you need something allocated in a specific way.
Then again, Odin is not a high level scripting language like Python or JavaScript - even the most bloated abstractions in Odin will run like smooth butter compared to those languages. When comparing to C/Rust/Zig, yeah fair, we'll need to bring out the benchmarks.
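A rough Rust analogy to that kind of dynamically scoped allocator context (all names here are hypothetical; this mirrors the idea rather than Odin's actual mechanism): a thread-local that callees consult implicitly, with an explicit override as the escape hatch.

    use std::cell::Cell;

    thread_local! {
        // Stand-in for an implicit context: the "current arena" id.
        static CURRENT_ARENA: Cell<u32> = Cell::new(0);
    }

    fn alloc_in_current(bytes: usize) {
        // The callee reads the context without it appearing as a parameter.
        CURRENT_ARENA.with(|a| println!("alloc {bytes} B from arena {}", a.get()));
    }

    fn main() {
        alloc_in_current(64);             // arena 0, the default
        CURRENT_ARENA.with(|a| a.set(1)); // switch context
        alloc_in_current(64);             // now arena 1
    }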
> and to an extent Odin does do this already with `context.allocator`
It has a temporary allocator as well, which could track memory leaks. Not so much anymore though, IIRC.
> As another commenter wrote "how do you allocate memory without an allocator?"
I would like to point out that this is basic knowledge. At first I was wondering if I had gone insane and it really is not the case anymore or something.
> As another commenter wrote "how do you allocate memory without an allocator?"
You call these things something other than "allocating" and "allocators". Seriously, few people would consider adding a value to a hashmap an intentionally allocational activity, but it is. Same with adding an element to a vector, or any of the dependent actions on it.
Yep, exactly: at its simplest, all an allocator is doing is keeping track of which memory areas are owned and their addresses. In a sense even the stack pointer is a kind of allocator (you can assign the memory pointed to by the stack pointer and then increment it for the next use).
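A minimal bump allocator in Rust makes the stack-pointer analogy concrete: "allocation" is just handing out the next offset and advancing it.

    struct Bump {
        buf: Vec<u8>, // backing storage, fixed capacity
        next: usize,  // plays the role of the stack pointer
    }

    impl Bump {
        fn new(capacity: usize) -> Self {
            Bump { buf: vec![0; capacity], next: 0 }
        }

        // Hand out `size` fresh bytes, or None when the arena is full.
        fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
            if self.next + size > self.buf.len() {
                return None;
            }
            let start = self.next;
            self.next += size; // "increment the stack pointer"
            Some(&mut self.buf[start..start + size])
        }
    }

    fn main() {
        let mut arena = Bump::new(16);
        assert!(arena.alloc(8).is_some());
        assert!(arena.alloc(8).is_some());
        assert!(arena.alloc(1).is_none()); // out of space; no hidden growth
    }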
For "adding an element to a vector" it's actually not necessarily what you meant here and in some contexts it makes sense to be explicit about whether to allocate.
Rust's Vec::push may grow the Vec, so it has amortized O(1) and might allocate.
However Vec::push_within_capacity never grows the Vec, it is unamortized O(1) and never allocates. If our value wouldn't fit we get the value back instead.
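Vec::push_within_capacity is still unstable at the time of writing, but its contract is easy to sketch on stable Rust:

    // Stable sketch of the same contract: O(1), never reallocates, and
    // hands the value back instead of growing when there is no room.
    fn push_within_capacity<T>(v: &mut Vec<T>, value: T) -> Result<(), T> {
        if v.len() < v.capacity() {
            v.push(value); // cannot reallocate: len < capacity
            Ok(())
        } else {
            Err(value)
        }
    }

    fn main() {
        let mut v = Vec::with_capacity(2);
        assert_eq!(push_within_capacity(&mut v, 1), Ok(()));
        assert_eq!(push_within_capacity(&mut v, 2), Ok(()));
        assert_eq!(push_within_capacity(&mut v, 3), Err(3)); // full: handed back
    }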
The usual thing for languages is to provide a global allocator. That's what C's malloc is doing for example. We're not asked to specify the allocator, it's provided by the language runtime so it's implied everywhere.
In standalone C, or Rust's no_std, there is no allocator provided, but most people aren't writing bare metal software.
Language explorers looking for lower level languages like this may also want to take a peek at the V language. https://vlang.io/
I won't say with confidence either is better than the other; but I think both are worth a look.
Odin (iiuc) always makes you manage memory; Vlang permits you to, but by default it links in the Boehm GC and manages memory for you in most cases.
Vlang and Odin in terms of syntax and legibility goals... well. If you're interested, a quick look will say more than I can. :)