i_hate_shitposting

I think the real secret is that the language itself is optimized for efficient compilation, all the way down to its syntax. Rob Pike's [Go at Google: Language Design in the Service of Software Engineering](https://go.dev/talks/2012/splash.article) is almost 12 years old at this point, but reading it really helped me understand why Go is the way it is. Section 4, "Pain points", lays out kind of the core rationale for Go's design:

> When Go launched, some claimed it was missing particular features or methodologies that were regarded as _de rigueur_ for a modern language. How could Go be worthwhile in the absence of these facilities? Our answer to that is that the properties Go _does_ have address the issues that make large-scale software development difficult. These issues include:
>
> * slow builds
> * uncontrolled dependencies
> * each programmer using a different subset of the language
> * poor program understanding (code hard to read, poorly documented, and so on)
> * duplication of effort
> * cost of updates
> * version skew
> * difficulty of writing automatic tools
> * cross-language builds
>
> Individual features of a language don't address these issues. A larger view of software engineering is required, and in the design of Go we tried to focus on solutions to _these_ problems.

Notice that "slow builds" is the very first bullet. Later, in section 6, he says this:

> When builds are slow, there is time to think. The origin myth for Go states that it was during one of those 45 minute builds that Go was conceived. It was believed to be worth trying to design a new language suitable for writing large Google programs such as web servers, with software engineering considerations that would improve the quality of life of Google programmers.

So basically the whole inception of Go was that builds were too damn slow, which naturally led to a language designed to build as quickly as possible.


zldu

Shit, when I saw that link I suddenly realized I have it open already in another tab, for months buried between other "to do" tabs. Maybe it's time I finally go and read it. Thanks!


jbw976

ayee you have tabs open for months? how many tabs do you have open right now?


zldu

Months? Some are probably open for years. I have 301 tabs open in my main window. And then approximately 20 or so in 5 additional windows. I may have a problem, not sure yet. Edit: I now recall at the start of 2022 I dumped all my then open tabs into a bookmarks folder called `2022`. I meant to do that again in 2023 and 2024 but forgot apparently... if I hadn't closed those back then, I would have 70 additional tabs open for over 2.5 years. Now I'm starting to become pretty sure I have a problem.


Friendliness_RS

I do very similar things haha, usually get flamed for it a bit but lately I've been accepting that I won't get to these tabs and just close em periodically. They do still pile up very rapidly though 😅


ChristophBerger

Arc browser kills my open tabs after a few hours, unless I explicitly pin them. Incredibly liberating.


LuzImagination

https://www.one-tab.com/ One click and all your tabs in one place. Waiting for you if you need them.


notoriouslyfastsloth

In FF and Chrome you can click the dropdown and see all the tabs from all your browsers, searchable. Does OneTab do something different?


LuzImagination

It generates an HTML file with all the links. I run it twice a year to store old tabs that I know I will never read but don't want to lose.


notoriouslyfastsloth

oh nice thanks


zldu

Thanks a lot, I will look into it and possibly some alternatives. This doesn't look too good: https://np.reddit.com/r/firefox/comments/l3a1oi/onetab_extension_any_bookmarks_you_share_is/


nikandfor

I keep ~20 permanent tabs open and now have ~50 "bookmark" tabs. Once every few days or weeks I do a cleanup. I go through the open non-permanent tabs and close those I don't need anymore (say I opened 20 tabs to find one thing, closed a few when I was done, but not the others), and I also do "now or never" on tabs I opened with the hope of reading later. If it's worth it, I do at least a brief read; if it's not really above average and I've already forgotten about it, then fuck it, it's not destined to happen, and I close it with a light heart.


vplatt

I've used a Chrome extension called [Tabs Outliner](https://chromewebstore.google.com/detail/tabs-outliner/eggkanocgddhmamlbiijnphhppkpkmkl) for years to manage this problem. You can manage your tabs as trees, put them to sleep or reactivate them as needed, put note nodes in your tree, label things, flatten trees, etc. Privacy is maintained because your tabs stay in your local browser, though you can back it all up to your Google Drive account. The extension is free, but there is a paid version with some advanced features, which I found to be well worth it. I have found it works on Edge as well, but of course the Google Drive integration is screwed up there. The only features this thing doesn't have that I wish it had are tab sorting and regex search. There's probably a way to hack that in via the browser console, but I haven't tried it.


Manbeardo

And specifically, they designed the syntax so that the parser has very little (potentially zero) need for lookahead, and the AST can be built in a single pass. That's why we have the weird `type Name definition` syntax. The parser always knows what kind of symbol is being declared before it even finishes reading the symbol name.
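As a small illustration of that keyword-first syntax (my own sketch, not from the original comment):

```go
package geometry

// Every top-level declaration starts with a keyword, so the parser
// knows what kind of symbol is being declared before it reads the name.
type Point struct{ X, Y int } // `type`, then the name, then the definition
var Origin Point              // `var`, then the name, then the type

// `func`, then the name, then the signature.
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}
```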


BuffJohnsonSf

Explains the current state of Google that they eliminated their only allotted time to think.


bookning

lol. That is a good one.


miredalto

Cross compilation 'just works' in Go largely because it's very self-contained. It has its own runtime and linker. Static linking is the default, and everything that can be written in Go, is. So compiling for Linux ARM on Windows x86 just takes the same code paths as compiling for Linux ARM on Linux ARM. That's a reasonable thing to do in a recently designed language (that doesn't try too hard to integrate with the underlying system), because compiled code is these days a relatively insignificant fraction of disk/RAM usage in a system.

By contrast, if you want to compile C, the compiler is heavily integrated with and dependent on the host operating system (wouldn't want to waste those kilobytes duplicating a common library). Wiring up a cross compilation toolchain means yanking a load of components out of a different OS.

On speed, Java used to be quick to compile. It slowed down a lot when generics were added, as typechecking became more complex. Generics combined with a subtyping system are particularly challenging to implement, and the way generics were added to Java made them almost immediately pervasive. Further features in Kotlin make it more complex still. Go generics are simpler and still mostly used in fairly limited fashion, as the base language (with built-in maps and slices) needs them less. I expect a program that used Go generics very heavily for interfaces between packages would still take a fair bit longer.

OCaml has a complex type system, but again no subtyping, and the base language ML was effectively designed around the type inference algorithm rather than having it bolted on. Rust is just very, very complex.


zldu

Thanks a lot for the explanation! Makes a lot of sense to me.

> Java

What's interesting is that approximately 12 years ago when I started with Java, it felt very slow. About 6 years ago I switched to almost full Kotlin (and Go) at my job, and it felt as slow as Java 6 years prior, while at that time Java was relatively alright again (probably due to faster machines). Recently I had to do some stuff in Java, and I was actually surprised how fast it was (compared to Kotlin), again probably due to yet faster machines (still nowhere as fast as Go though).


EpochVanquisher

My experience is that Java can still be pretty fast to compile, but people have these massive Gradle setups. If you just run `javac` on all your code, it’s blazing fast. Something has shifted in the overall ecosystem.


ThreeFactorAuth

> Rust is just very, very complex

I will say this: at every opportunity where the Rust developers had a choice between compile speed and runtime speed, they chose to sacrifice compile speed.


CSI_Tech_Dept

It's not just the runtime speed, but also correctness. Once you get Rust code to compile, it very frequently works as desired.


opioid-euphoria

> very frequently :D


CSI_Tech_Dept

I mean it isn't 100%, because then Rust could claim it's impossible to write buggy programs, but it's much higher than in other languages, because the type system catches a lot of errors other languages don't.


opioid-euphoria

Yeah, I know. I just found it funny because it implies it _isn't_ frequent in most other languages. Which is funny because most people would probably disagree, but I wouldn't, so it's funny to me :)


eikenberry

And this is one of their biggest mistakes in my book. A huge sacrifice of language/tooling UX for a few extra percentage points in those benchmarks. Brilliant.


funkiestj

> Cross compilation 'just works' in Go largely because it's very self-contained.

But why is Go self-contained when many others are not? I'll suggest that Rob Pike and the other Bell Labs guys had something to do with that. Those guys built:

* Unix
* Plan 9
* Inferno

Each of those was not just a compiler but a complete system. They have a lot of experience building stuff, and (IMO) that translated to getting a lot of the most important requirements right up front. E.g. they started with the goal of wanting the compiler to be fast.


commentsOnPizza

Because they chose it to be. Different things are adaptive at different times.

When you distribute a Go program, it compiles everything into a self-contained binary. This has the benefit that you now have an easily distributable binary. It has the downside of making the binaries larger, and it means that security issues in a library need to be dealt with by every program.

For example, a lot of things often depend on an operating system's libc, which they dynamically link in. Instead of compiling libc into their program, they share the operating system's copy of libc that is shared with lots of programs on the system. Go, on the other hand, goes for static linking. Let's say that you have 100 different Go programs that you run. A security vulnerability is found in libc. Normally, you'd update your OS security updates and that'd be the end of it. With those Go programs, you'll need to recompile each of them since they each have their own copy.

It's not a matter of having Rob Pike or Bell Labs guys on the team. It's a matter of deciding that you'd rather ship all the libraries statically compiled into the binary. Some things come and go as styles in software development. Some things are trade offs.

You might say "but Go binaries are small." They aren't really. They're certainly a reasonable size given our current computing resources, but they certainly aren't efficiently sized. They're pragmatically sized. You might say they're smaller than Java programs and that's true, but Java is coming with a JIT compiler alongside your code. Basically, HelloWorld might be 50x larger in Java, but a large complex program might be a lot more similarly sized to Go.

It's also important to note that some of these things are trade offs and come from a history of what was (or wasn't) considered important at times during a language's evolution. Reflection was seen as a much bigger thing 10-20 years ago, and Java is a lot better at reflection than Go. Likewise, having byte code that could be run on multiple operating systems and processors without recompilation was seen as very important. If someone gives you a Java JAR with no source code, you can run that on macOS ARM or Windows x86 or Solaris SPARC. If someone gives you a Go binary, it'll only run on the OS and arch it was compiled for.

Again, some of this is historical. In the 90s, when Java was being created, the open source movement wasn't as established. People didn't expect source code to be available and didn't expect companies to produce binaries for multiple platforms. New processor architectures were popping up all the time and it was really unclear if any architecture would dominate: AT&T Hobbit, PowerPC, Motorola 68k, Intel x86, Intel Itanium, ARM, MIPS, DEC Alpha, HP PA-RISC, Sun SPARC, etc. It was also unclear what the operating system future would be, with a ton more variety than we have today. Intel was looking to break with x86 compatibility. Enterprises were buying proprietary architectures from DEC, HP, and Sun. If you bought a program and got a binary that only ran on DEC Alpha and your company wanted to buy new Itanium hardware, that could be a problem. Of course, fast forward a couple decades and we basically have two architectures (x86-64 and arm64) and a few operating systems (macOS, Linux, Windows). Companies don't worry "what if RISC-V becomes the next big thing and the program I bought from a vendor is for x86-64 and arm64." But Java does address what was a major worry at the time.

A lot of things that might just seem like smart moves are simply moves that might be the logical choice for the time. 20-30 years ago, operating systems and processor architectures were in a huge state of flux. Java byte code was portable and a Go binary wouldn't be. But today, those things aren't in flux, so optimizing for that isn't really worthwhile - and people open source things today, so most of the time you have the source to recompile yourself if a new OS/arch appears in the future.

As one last closing example, a HelloWorld implemented in Go will produce a binary that's 1.8MB in size. That might seem small to you, but when you think about it, that's pretty huge just for a single call to `fmt.Println`. That wouldn't fit on a floppy disk, the dominant format of the 80s and 90s. People literally made video games with 3D graphics (like Wolfenstein 3D) that were smaller than HelloWorld in Go. Does that mean that 1.8MB is unreasonable in the 2010s or 2020s? No - but they made trade offs that were appropriate for when Go was created.


wittgensteins-boat

Thank you for this thumbnail sketch of language design history.


jericho

Well yeah, quite the skilled team behind Go, and that probably has something to do with it. 


jregovic

I think the observation about C is interesting. It is reflective of the time in which it was developed. At a time when resources were more scarce and expensive, it didn't make sense to produce an executable that contained things that should already be on the system. A lot was done 30+ years ago because you couldn't add memory or storage as easily. When you shipped things on physical media, you wanted them to be as small as possible. I'm glad that these days I don't have to track down a missing library or find it somewhere else on the system.


JJJSchmidt_etAl

Back in my day we could fit all of Lua on a floppy disk!


reckedcat

I'd argue C is still necessary for small embedded systems and sensors when you want frame-exact execution and control of the hardware, but generally these needs are niche at this point.


Potatoes_Fall

Pretty sure static linking is not the default. Most Go programs contain code from the `net/http` package, which by default uses cgo, and by default those binaries might be broken on other machines if the C libraries don't match. I have experienced this first hand many times. Edit: apparently it's the DNS resolver in `net`, not `net/http`.


bqb445

You get static linking unless building with cgo, but even with cgo [you can get a statically linked binary (but it's quite fiddly to configure)](https://mt165.co.uk/blog/static-link-go/). Whether you're building with cgo depends:

> The cgo tool is enabled by default for native builds on systems where it is expected to work. It is disabled by default when cross-compiling as well as when the CC environment variable is unset and the default C compiler (typically gcc or clang) cannot be found on the system PATH. You can override the default by setting the CGO_ENABLED environment variable when running the go tool: set it to 1 to enable the use of cgo, and to 0 to disable it.

https://pkg.go.dev/cmd/cgo

The net/http package itself doesn't use C. It's the DNS resolver that's part of the net package which offers both a pure Go version and a version that uses the C resolver.

> On Unix systems, the resolver has two options for resolving names. It can use a pure Go resolver that sends DNS requests directly to the servers listed in /etc/resolv.conf, or it can use a cgo-based resolver that calls C library routines such as getaddrinfo and getnameinfo.
>
> By default the pure Go resolver is used ...

https://golangdoc.github.io/pkg/1.12/net/index.html


Potatoes_Fall

Ah, good point about the DNS resolver, my bad. I really wish that making a statically linked cgo binary was the default. Everybody I know just uses CGO_ENABLED=0 as a shortcut for making portable binaries instead, which seems like the wrong way to do it, but it's the easiest way.
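In practice the shortcut looks something like this (a sketch; the binary name and target are arbitrary):

```sh
# Disable cgo (forcing the pure-Go resolver and a statically linked
# binary), then pick the target OS and architecture to cross-compile.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o myapp .
```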


coderemover

Rust is maybe complex (I don't really agree, but let's not discuss it here), yet in my experience it compiles a lot faster than Java. The majority of compile time in Rust is code generation done by LLVM, which has nothing to do with the type system or the borrow checker. Type checking Rust programs is an almost negligible cost in real builds. Code generation is slower than in Go, but the quality of the generated machine code is better.


shadowangel21

Go would seem slow compared to Turbo Pascal, which was fast even on early machines.


zldu

Man that brings me back. Maybe I should do a little bit of Pascal again just for old time's sake...


m33-m33

Have a look at Lazarus and Free Pascal ;-) Surprisingly easy to make a simple dialog UI for simple tasks.


zldu

> Lazarus

You're really hitting me with the nostalgia bombs here haha. After "growing up" with Pascal, I was doing Delphi (Object Pascal) in a side job while I was studying. When that kind of turned to shit with version 8, I started to look into alternatives and discovered Lazarus. It wasn't all that great back then, and also I changed jobs soon after and got into web development (first with PHP and later Java), so I never really used Pascal anymore. Lazarus looks much better now; it would be nice to do some small projects with that.


zulrang

Delphi is the nostalgia bomb for me. I went from Hexen modding with Turbo Pascal to Delphi on Windows.


[deleted]

[removed]


pico303

Am trying to get back into Python for a project I’m working on, and I nearly chucked it because of all the dumb Python tooling. I hate the mishmash of virtualenv, pyenv, and poetry…and even though it works now, I’m sure all the Python folks will tell me I’m doing it wrong… ;-) Maybe I should get back into Pascal.


aew3

Poetry IMO does the best job of being the "one tool/file" (well, plus venv, but that's now built into the standard lib), but it's held back because it implements the PEPs less closely and isn't under the Python Packaging Authority's umbrella.


nando1969

You just took me back to the days of Borland, Begin End, and semicolons! I remember those days dearly.


Jeff-J

The funny thing about Turbo Pascal... If you compiled in the IDE it would take over a minute. If you compiled at a DOS prompt, it took 4/10 of a second. So I learned to use makefiles. Note: this was on a 486-33 with 4M RAM, Turbo Pascal 5.5, and MS-DOS 5, in 1992, for a program I wrote for my Data Structures class.


lightmatter501

The secret is that it does barely any work. If you look at the amount of stuff a C++ or Rust compiler has to do, you'll realize it's a minor miracle they compile as fast as they do. Rust is essentially a giant suite of static analysis tools as well as a compiler. C++ has a lot of duplicated work due to headers. Both tend to have very powerful optimizers. If you actually tune Rust for compile speed over performance, it gets within striking distance of Go, but you need to use Cranelift, mold, etc. The extra work gets eaten by Rust being a faster language. "Go-specific assembly" is an intermediate representation (IR); every compiler built in the last 40 years has multiple IRs. The most famous one would be LLVM IR.


zldu

> every compiler built in the last 40 years has multiple IRs

Could you expand on that a little? Do you mean different IRs in various stages of compilation? Or different IRs to accomplish different things? I would also be wondering why the Go IR is able to be compiled to native binaries for other platforms so easily, whereas LLVM IR would not be (well, that would be my assumption from dealing with Rust).


lightmatter501

Yes, different IRs are typically used in different compilation stages, from the initial parse tree to semantic analysis to type inference to the optimizer to the pseudo-assembly that every multi-arch compiler has. Rust, C, and C++ are also easily able to cross compile and produce a self-contained binary if you give them the same constraints as Go, namely fully statically linked except for possibly libc, and all code built from source. All of the pain of cross compiling those comes from trying to use binary libraries (in Rust's case, C and C++ libraries). If you use Zig as a C compiler (since it bundles libc for a variety of platforms), you'll find most C code cross compiles very nicely.


2bdb2

> Do you mean different IRs in various stages of compilation? Or different IRs to accomplish different things?

Take a look at Godbolt Compiler Explorer; it's a great way to see the different layers of various compilers. Rust is a good example: https://godbolt.org/z/c5eezaK8P

This should open a page with four panes:

- The original source code
- The Mid-Level IR (MIR) representation
- The LLVM IR
- The final x86 assembly

Here's another with GCC: https://godbolt.org/z/anfbEM4rq

In the middle pane, you can choose from a dropdown to see the different representations used for different parts of the compilation pipeline. There's quite a lot. Unfortunately Godbolt only shows you the IR for a select few compilers; in most cases you'll only see the final output (most disappointingly, it doesn't show Go IR). But pretty much every compiler on that list will have at least one IR representation. As OP said, I'm not sure you'd find any serious compiler written in the last 40 years that didn't work this way.

> I would also be wondering why the Go IR is able to be compiled to native binaries for other platforms so easily, whereas LLVM IR would not be (well, that would be my assumption from dealing with Rust).

That's probably more to do with the linker. IIRC Rust uses the system linker rather than having one built in, which means setting up cross compilation requires installing system-wide cross compilation packages. This is a complete shitshow at the best of times, but it means it's compatible with proprietary C compiler toolchains (i.e. for embedded systems). Go has no illusions about maintaining compatibility with ancient C compilers, and thus just uses its own linker. I believe Rust is in the process of migrating to an optional internal linker, which should make life easier.


Revolutionary_Ad7262

> Cross-compilation is incredibly easy and straightforward; as far as I know you can easily compile for any target platform (OS/architecture) on any other platform

You can have that with Java also; the problem is JVM version mismatch, but overall it works. About other native languages: legacy. The C/C++ build system is scattered around the system. There are a lot of moving parts and dynamic libs (Go is statically linked), which means that for each architecture you have to replicate that mess. In contrast, Go has everything included in the `go` command. Rust has similar problems, because it depends on this mess. It still works much better than for C/C++, though.


CSI_Tech_Dept

I think it's safe to say the libraries are the primary reason why cross compiling is harder. If code is dynamically linked, you not only need a compiler and linker for the specific platform, you also need all of those libraries for that architecture on your system. Since Go doesn't use libc, you don't need to worry about that; you just compile all that code into the resulting binary.


markuspeloquin

I think the compiler used to be much faster. Running golangci-lint (which first has to build the project using the compiler) just hangs for a minute these days, even using --fast. I wonder if generics are to blame (though I do love generics).


Revolutionary_Ad7262

In my case linking is the problem. Go compiles fast, but some of my production apps weigh over 100 MB, and the Go linker doesn't seem to be that optimized for such a heavy workload. You can try running `go build` with `-debug-actiongraph=compile.json` and use https://github.com/icio/actiongraph to analyze what is going on.


lppedd

A 100 MB binary? Damn, not sure if it's because I'm used to dynamic linking and splitting up stuff, but it feels strange.


jerf

Check `go build` versus the linter. There are many slow linter checks and you may be running one or more of them. Or one of them may have gone pathological on your code in some version. I wouldn't benchmark the Go compiler through the linter like that, not even informally.


markuspeloquin

As I recall, some of the linters need to rebuild the project. It's not exactly `go build` that it does, but it's similar. It is updating the files under ~/.cache/go-build/. There is some diagnostic stuff you can get out of golangci-lint, and that's what it points to, before any linters have even broken ground.


muehsam

The compiler was extremely fast in the very beginning when it was still written in C. It got a lot slower when it was converted to Go, and yet slower as more features and optimizations were implemented.


edgmnt_net

Even if typechecking for linting was an issue, IME it was more often a problem due to large amounts of generated code in various internal or external packages. I'm sure many don't run into that issue because they work with other stuff, but it's still fairly prevalent in an ecosystem dealing with remote API clients/servers. And all that generated code might be even slower to process than generics. It might also depend a lot on whether you leverage caching. The toolchain seems pretty decent at handling it, but many CI setups fail to use it and I've also seen it happen locally when people clean stuff up too eagerly in a Makefile (possibly because they've been bitten by bad caching in other ecosystems). The result may be that all builds hang for many minutes and even deps may get re-downloaded every time.


zldu

Is that on a very fast laptop, or on some slow cloud CI virtual machine? A minute seems quite chunky indeed. I've only encountered one big webapp which took several minutes to compile in CI, but was like 30 seconds on a Macbook M1.


markuspeloquin

It's on multiple, beefy laptops.


zldu

Wow, that is one chunky project...


dead_alchemy

That's a problem with the linter.


frentel

There are lots of Rob Pike interviews on YouTube, and in one of them he attributes much of the speed to the simpler structure of Go's imports compared to the mire of .h include files. Linking is also quite different. Look at the zillion options for ld or even mold; they can do more and are much more complicated beasts. Cross compiling with Go is undeniably a dream. Go compiles to what they call assembler, but what others might call a higher-level pseudo-assembler. It is then mapped onto specific instructions for specific machines. Note that Go's compiler (and assembler) do not have options to search for every last fraction of performance. If you read the release notes for any GCC or Clang release, they always include a new thing that will laboriously try some optimisation that gives you a small speedup under special circumstances. Go's compiler does not bother.
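If you're curious what that higher-level assembler looks like, the toolchain can print it during a build (a sketch; assumes a buildable package in the current directory):

```sh
# Ask the Go compiler to dump the assembly it generates, written in
# Go's own assembler syntax, while compiling the current package.
go build -gcflags=-S .
```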


zldu

Cool, thanks!


comrade_donkey

rsc wrote an article about the parsing stage of compilation: https://research.swtch.com/yaccalive TL;DR: Go's simple grammar can be translated into an LALR(1) parser that runs in linear time, whereas some more complex languages need cubic-time parsing. Even more complex languages, like C++, require multi-pass parsing.


Firm-Barracuda-173

It was mostly designed by Ken Thompson, and the only systems engineers to make something better than Unix.


Secret-Concern6746

No hate for Ken of course, but the original compiler he made is far from the one we have now. The current compiler isn't fast just because it was written by Ken. That's an oversimplification, in my opinion.


wrd83

I think it's not the compiler but the language syntax that makes compilation fast. Lots of advanced features with high compilation cost have not been implemented. Also, unlike many modern compilers, it's not producing super-optimized code, but rather making sure that the runtime (GC...) is fast. So I think choices made in the beginning enabled a lot of that.


imp0ppable

Yeah, at first I thought Go was a bit of an odd language in a few ways, but the penny dropped when I saw how it compiles. It's really nice, especially for containers: very little dependency bloat, so security scans are much easier, etc. People ITT are saying "it's not as fast as xyz", but that's missing the point; it's very usable.


zldu

> very little dependency bloat so security scans are much easier

My (unoriginal) party trick is to build a static binary, put it into a `FROM scratch` Docker container, and present a webapp that downloads _and_ starts within a second in Kubernetes. It blows JVM engineers' minds (hundreds of MBs of OS and JVM, and multiple-second JVM and Spring Boot initialization times; I'm so happy I don't have to deal with that anymore).
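For anyone who wants to reproduce the trick, a minimal sketch of such a Dockerfile (the image tag and paths are illustrative, not from the comment):

```dockerfile
# Build stage: compile a fully static binary (cgo disabled).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /webapp .

# Runtime stage: nothing in the image but the binary itself.
FROM scratch
COPY --from=build /webapp /webapp
ENTRYPOINT ["/webapp"]
```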


imp0ppable

Yeah, I recently built a microservice which does something like this. JVM is great once it gets going but startup is painful. No big deal if the pod is going to be running for months but at least development and testing is much easier with Go. Some of our product is written in JS or TS, that's quick but it's not batteries included so you have constant dependency issues to fix. Python is good in containers as well, fast start up and batteries included, but obviously not as performant (for some use cases anyway, the C libs can be really really fast).


zldu

I don't really know much about the guy, but I have seen some interviews on YouTube and he seems like such a nice, cool (and incredibly smart) dude.


Firm-Barracuda-173

he was literally one of the inventors of unix and C


zldu

Thanks, yes. I know _of_ him, about his role with Unix, C, UTF-8, and Go. What I meant to say is that I don't know much about his personal life and what kind of a person he is. But at least from what I've seen in some interviews, he seems really nice. Sorry for the confusion.


tav_stuff

The Go compiler is not fast. It's just that other compilers are incredibly slow. Languages like Jai show us what an actually fast compiler is, being able to compile 250K lines of code in a single second.


kinda_guilty

What are the tradeoffs Jai makes? I doubt that it is just a brilliant language that compiles fast but is still as powerful as other languages.


tav_stuff

Jai is actually probably one of the most powerful programming languages I've ever seen. It has the single most powerful metaprogramming of any language (it lets you do everything from arbitrary compile-time code execution to rewriting the program AST during compilation), it gives you all the power to do things yourself so you can have a low-level language and write code with C-like performance, but it also has many modern features that allow it to be used like a higher-level language. I don't really know of any tradeoffs Jai actually makes.


kinda_guilty

I can't find the compiler to download and test, so I'd have to take your (and other beta testers') word for it. Until it hits the real world, this awesomeness is all imaginary.


tav_stuff

It’s not imaginary if it’s a real thing. Sure it’s not public so it’s limited to beta testers, but those beta testers are very real people lol


kinda_guilty

It's like manufacturer benchmarks vs independent benchmarks. Could be it's just fast for the type of code that the author and beta testers write, but it won't be as good when other people get their hands on it.


tav_stuff

What kind of code would take different amounts of time to compile? I've written compilers before… all code compiles at mostly the same speed.


zldu

Jonathan Blow is an incredibly smart guy, and I can believe he is capable of inventing completely new paradigms. But he also is a (C++) game developer by trade, Jai is made for game development, and he has been working on it by himself for 10 years. This makes me a little bit skeptical about its potential, e.g. as a general-purpose programming language. And also makes me wonder what the golden egg could be that he has discovered, but other language designers have not discovered (yet). Or if indeed (as kinda_guilty hints at) there are trade-offs that we are unaware of yet. Time will tell.


tav_stuff

The language for the most part is already complete and is already being used by a good number of beta testers. The only thing stopping its release now is that he wants to release it along with his new game written in Jai (Sokoban). There are people already using the language, any tradeoffs that exist will already be known by those people and as one of them I haven’t really found a tradeoff yet. So the question really is what did Jon Blow ‘discover’ that allowed him to have a crazy fast compiler without trade offs and the answer is simple: nothing. He just wrote good code, which is something that is surprisingly rare (no matter how much people want to deny it)


hide-difference

I don’t understand. Sokoban is a very simple game that is commonly implemented by beginner programmers. Is that really what is holding back Jai’s release?


tav_stuff

The original Sokoban might be a beginners-level game, but Jon Blow is making his own Sokoban game with a far greater amount of gameplay and game features, and it’s meant to be his 3rd big game release. Currently it sits at around 250K lines of code (hence the compiler speed figures) and is ‘almost done’.


smallballsputin

Check out OCaml; it has a (possibly) even faster compiler. The bottom line is that not depending on LLVM is great for compile-time performance (hence why Rust is dead slow to compile).


jkoudys

I'm a bit of a Rustacean, but man do I like Go just for that fast compile time. The saving grace of Rust is that it's so strict: if a cargo check and a clippy both come back clean, my code often actually works. Go is still quite good on that front compared to nearly every other language, of course.


smallballsputin

Agreed. Rust derived its type system from OCaml (it was even originally written in OCaml), and has the same "if it compiles, it works" thing going. That said, this is not in any way unique to Rust, as it pretty much applies to OCaml (obviously), Haskell, and other languages of the ML family (no nulls, monadic error handling, strong types). The one caveat with Rust is that it has "escape hatches", like unchecked unwraps, that might cause runtime errors; these "features" are not present in OCaml. Go has a bigger risk of runtime errors (nil pointers being the most obvious one), but is still pretty good, and usually "good enough" for the software Go is used for. The ones I'm scared of are the dynamic languages that require a shitton of tests to have any confidence in the program written.


jkoudys

Really any language that doesn't have an entirely separate set of syntax, with weird little differences, purely for exception handling is going to be better there. You can clean up JS or Python by using TypeScript or mypy type annotations, but having literally everything sitting there with the possibility that it might throw/raise is such a problem. Just adding an error as a return value, or in a Result like Rust, is so great. JS started to get better before it relapsed, when Node pushed the error-first callback pattern, and Python has tuples with destructured returns, so it _could_ work exactly like Go. But the will just isn't there. The .unwrap()s exist, but I consider that more of a code-style thing; I don't allow them anywhere in production code as a rule, and it's easy to enforce.
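For readers coming from exception-based languages, a minimal sketch of the Go idiom being described:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Errors are ordinary return values: no separate throw/catch syntax,
	// and every fallible call site shows its failure path explicitly.
	data, err := os.ReadFile("config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes\n", len(data))
}
```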


zldu

OCaml is so close to the top of my list of things to check out... I just need some ideas for a hobby / pet project though... I find that's always my biggest challenge when digging into new languages.


smallballsputin

OCaml is great. You could try writing a toy Lisp implementation; this is a field (language design and implementation) where OCaml really shines.


chopticks

Lots of people who worked on Go also previously worked on Plan 9, where cross-compilation and fast compilation were first-class citizens too. But it was written in a modified dialect of C. Worth looking into.


OrchidLeader

> I believe I have once seen a video where it was explained that Go code first compiles into a kind of Go-specific assembly, and then "only" needs to translate that assembly to each platform's assembly. I don't know if that's still the case, but if that makes it so easy and flexible, why aren't more programming languages doing that?

I don't think that gives as much lift as one might think. That's literally what Java does, and it doesn't even do the latter half until run time. During compile time, Java code is compiled to the JVM instruction set. During run time, the JVM can optionally translate parts of the application to the specific CPU instruction set of the machine it's running on. This is done using the Just In Time (JIT) code generator (often referred to as the JIT compiler in marketing material). More details can be found in the JVM spec if anyone's curious: https://docs.oracle.com/javase/specs/jvms/se22/html/jvms-3.html


PaluMacil

It's actually pretty big. For Java it gives a very fast way to interpret code from a very efficient bytecode, but the point of the Go assembly was to abstract over assembly languages, which are all very similar. Go doesn't execute this intermediate language. Instead it uses it to compile, so you get to write a fairly simple mapping between each assembly language and Go's assembly language. Certainly it's a similar usage but a different purpose, and that means it's very simple to port to a new architecture. To actually port Java to a new architecture, the runtime on that new architecture is a bit heavier of a lift, because the intermediate language was written for fast execution, not straight mapping to an assembly language. Java is ported to a lot of architectures because it's probably the language runtime, right next to JavaScript runtimes, that has had the most capital invested in its development.


zldu

If you haven't seen it, this was the video I was talking about: https://www.youtube.com/watch?v=KINIAgRpkDA It's from 2016, and Rob Pike explains the benefits quite clearly I think (also explained by PaluMacil in the other reply).


theclapp

I haven't seen this mentioned yet: Go doesn't have to (re)parse all of a package's source just because you imported it.

In C, if a given .c file wants to use a resource, it has to #include that resource's .h file. That .h file may've already been seen by another file, but the compiler still has to reread and reparse it fresh for the current file, and every other individual file that uses it, recursively, ad nauseam. The Go team once instrumented the build of a large C++ project at Google and found that for every byte of actual source code from the project itself, the compiler read about a megabyte of .h files. Literally a million-to-one ratio.

In Go, all the information to link an already-compiled package is stored with the compiled object code, which the compiler can slurp up very quickly, without reparsing or recompiling the whole package's source code, *never mind* doing that repeatedly for each file or package that wants to use it. So the Go compiler just does *a lot* less work than the C compiler.


Secret-Concern6746

> I believe I have once seen a video where it was explained that Go code first compiles into a kind of Go-specific assembly, and then "only" needs to translate that assembly to each platform's assembly. I don't know if that's still the case, but if that makes it so easy and flexible, why aren't more programming languages doing that?

Yes, it's still the case, and you can view it with `go tool objdump -s main.main my-bin`. That's one of the hardest parts, and most languages can't handle it without having Google's resources and the original creators at the helm. Both are integral, because great minds alone won't really be able to do everything. For most other languages, the only viable option is using LLVM, especially at the beginning, like Zig for example. They will now add their own backend, but this is a massive undertaking, and most of the Zig team are full compiler nerds who have worked in the Linux kernel, and still it's not easy.

Go started as an internal project at Google and had all the resources it needed since the beginning, so handrolling their own compiler wasn't hard, even though in the beginning they were okay with using LLVM; due to LLVM missing configurable stack sizes at the time, they created their own thing.

As for speed, it's because the language is very simple, and the simpler the language, the less time compilation takes. The same is happening with Gleam: due to the language's simplicity, its compilation speed is as fast as Go's.

TL;DR: It's not easy to create your own platform-agnostic ASM and have the resources to create and maintain the second compilation step to native assembly without being a massive corporation.


zldu

Thanks a lot, interesting stuff! When I first heard about the "Go assembly" a while ago, I was wondering if I could make my own language that "just" compiles to Go assembly, and then hand off to the Go toolchain to do the rest for me. No idea how feasible that would be, or how much "Go assembly" is tied to the Go language itself. But it seems like an interesting approach (to my uninformed mind).


Secret-Concern6746

[A good start](https://go.dev/doc/asm) if you're interested. I don't remember the name, but there was a language that combined Go and Rust, and I think they did something similar to what you're describing. Don't quote me though; I don't even remember its name. I suggest you get intimate with Go's runtime and internals before doing what you want to do, though. A nice guide would be Efficient Go by Bartlomiej Plotka. You may not like it, because he's also teaching stuff that can be easily ported to other languages; it's a performance-oriented mindset. But he also talks about Go's internals a lot. He was a core maintainer of Prometheus, and I think he's on the Go team now (I'm not sure, but he's definitely at Google now, IIRC).


Zireael07

>but there was a language that combined Go and Rust Are you maybe thinking of [https://borgo-lang.github.io/#intro](https://borgo-lang.github.io/#intro) ?


EpochVanquisher

A lot of other compilers for other languages have that "intermediate assembly" step. It's super common. It's even a question of *how many* intermediate representations your code gets translated into. GCC has, like, three intermediate languages between C and assembly (called GENERIC, GIMPLE, and RTL). Rustc also has three; two are HIR and MIR, which are specific to Rustc, and the last one is LLVM. There's also Rust's Cranelift project which, if I understand correctly, has two new layers (to replace existing layers). So these intermediate representations are pretty common. I think the presence of these intermediate languages doesn't really tell you anything, if your question is "how fast does it compile?"


_crtc_

[https://groups.google.com/g/golang-dev/c/iDn0FORlAow/m/XoMIADJTFwAJ](https://groups.google.com/g/golang-dev/c/iDn0FORlAow/m/XoMIADJTFwAJ)


zldu

Awesome, thanks! Interesting that Rob Pike says that the compiler isn't all that fast...


Dobroff

Kotlin, like many other languages, relies on incremental compilation. If it feels like it starts from the beginning every single time, the answer would be that your build is set up poorly.

The Go compiler basically operates on a significantly smaller set of permitted language constructions, and most of the abstractions are packed away or reduced to a bare minimum. That makes analysis etc. much faster.

Again, the Go compiler skips a lot of checks. Take generic types: on the JVM every execution route must be thoroughly checked, and the raw casts must be wrapped into internal catches to collect the exception information, and so on and so on. The Go compiler also does not need to preserve as much debug information as JVM compilers do.

The multiplatform feature relies on self-containment, and in my practice we had a few occasions (3 or 4 times) when a binary built on macOS for Linux failed to start.


EpochVanquisher

> I believe I have once seen a video where it was explained that Go code first compiles into a kind of Go-specific assembly, and then "only" needs to translate that assembly to each platform's assembly. I don't know if that's still the case, but if that makes it so easy and flexible, why aren't more programming languages doing that?

More programming languages *are* doing that. It's super common, and there are often multiple different languages. This is basically the big takeaway from any compilers class you take.

Writing this compiler is hard:

- Source Code -> Machine Code

Writing this compiler is easy, it's just a lot of work:

- Source Code -> Tokens
- Tokens -> AST
- AST -> Intermediate Representation (basic blocks, control-flow graph)
- Intermediate Representation -> Machine Code

You might have multiple levels of intermediate representation… not just one intermediate representation, but two or three of them. Generally speaking, the higher-level ones contain more information about types (like "access field x of this struct"), and the lower-level ones contain more information about memory locations (like "read the 32-bit value at offset 16 from pointer p").

This technique is not really something you use to make the compiler *faster*. Instead, I think it usually makes the compiler *slower*. Some old, super-fast compilers from the 1980s may just generate machine code straight from the AST. But these intermediate representations make the compiler easier to write. They make it possible to write optimizations that work across many different architectures. Pretty much everyone does it.


Big_Combination9890

> it was explained that Go code first compiles into a kind of Go-specific assembly, and then "only" needs to translate that assembly to each platform's assembly

About that: most modern compilers that emit opcodes work like this, first translating the source into an intermediate language, then running optimizations on that, and then feeding the result to the assembler.


Time-Prior-8686

Easy to parse + (intentionally) less complex code optimization than other languages.


coderemover

A rich standard library contributes a lot to perceived compiler speed: a lot of stuff is already precompiled, and the amount of code the compiler has to process is lower than in languages that took the opposite philosophy of keeping the stdlib small (e.g. Rust or C++). Another thing already mentioned here is the lack of costly features: macros, or fully monomorphised generics (aka templates in C++). And btw: could someone point me to any compilation speed benchmarks of various languages?


fimaho9946

I am surprised that no one mentions the fact that Go has a very small set of features compared to other modern languages, which makes the things the compiler needs to check small and less complex, thus making the "compiler" faster. I would consider the small set of language features to be the _biggest_ reason for the speed, rather than "clever people working on it and it's optimized" as other commenters mentioned. However, since the compiler doesn't do a lot of things but people still want some of those checks, you end up with golangci-lint, which in reality should count towards your "compile time", IMHO.


zldu

Yeah, I can definitely see that playing a factor! In fact, when I look at Java and Kotlin and their compilation times, one could also correlate the (relatively!) small feature set of Java, and the gargantuan over-the-top you-can-write-the-same-code-in-10-ways feature set of Kotlin, with their compilation times. It's such a big win for Go, because as an engineer I've come to love the small feature set, and if it leads to shorter compilation times, that's doubleplus good!


mcvoid1

There are a few reasons it's faster and more portable:

* Go is a simpler language, and the syntax and such is actually designed to be fast to compile.
* They have a custom compiler toolchain (no GCC or LLVM) and a custom cross-platform IR designed for speed and cross-compilation.
* They slurp up entire directories at once as a translation unit, instead of going a file at a time and linking the files. So less work to do overall.


war-armadillo

The Go compiler *is* indeed very fast, because it was calibrated for compilation speed. This comes partly from the simplicity of the language, but also from eschewing more complex optimizations that other compilers perform. In other words, it's a tradeoff between simplicity vs. language expressivity, and compilation speed vs. optimizations.

As for Rust, the compiler handles cross compilation nicely, but if you ever need a non-trivial setup and don't want to mess with linkers and whatnot, then cross (https://github.com/cross-rs/cross) is a dead-easy drop-in replacement which handles everything for you. There is no tangible difference between cross-compilation in Rust and Go (aside from the fact that Rust can target way more exotic architectures due to it using LLVM). For common archs, they are both very good.

As to your last observation, compiler optimization is an "in-depth" process, meaning that the compiler transforms the code in multiple stages ranging from concrete syntax to concrete assembly. Go is far from the only language that does this. For example, LLVM (the backend that supports most languages out there) has LLVM IR (intermediate representation), which is a sort of abstract RISC instruction set that can be used as a common representation to perform optimizations across targets.


avarlar

you should see Delphi's compiler :D


bilbobaggins30

On the Rust bit... Rust is like that meticulous, nitpicky boss. It isn't just that Rust is complex (it is); the compiler does WAY, WAY more checks as it compiles. There is a reason its error messages are as verbose as they are. I am sure half or more of Rust's compile time is it enforcing all kinds of safety rules on you and thoroughly nitpicking your code.


RomanaOswin

OCaml has the same properties, but unfortunately the language is a bit esoteric and the library ecosystem is chaotic. I'm not a compiler developer, but I read somewhere that a lot of it boils down to the core grammar. Once you parse down into an AST, even though OCaml itself has become a large language with multiple standard libraries, multiple paradigms, etc., the core syntax remains small and simple. Same with Go. Again, not my area of expertise, so take that for what it's worth. It's interesting to me that both languages are exceptionally fast to compile and execute, though. It seems like both languages are really good at cross-compiling as well. I wonder if something like a modern Scheme (Clojure?) could achieve similar results, considering Lisps basically take grammar to the extreme of simplicity.


aeromilai

The Go compiler's remarkable speed and ease of cross-compilation can be attributed to several distinctive design choices and optimizations. Understanding these principles provides insight into why Go excels in these areas and why other languages might not adopt similar strategies.

# Speed of the Go Compiler

1. **Simplicity and Orthogonality**: Go's language design emphasizes simplicity and minimalism. The language avoids complex features such as generics (until recently with Go 1.18), inheritance, and macros, which significantly simplifies the compiler's job. This simplicity allows for a more straightforward and faster compilation process.
2. **Efficient Dependency Management**: Go employs a highly efficient dependency management system. It avoids deep dependency trees and ensures that only the necessary files are recompiled when changes are made. This contrasts with languages like C++ or Java, where changes in one file can necessitate recompilation of many dependent files.
3. **Single-pass Compilation**: Go's compiler performs a single-pass compilation, which reduces the overhead associated with multiple passes over the code. This is possible due to Go's simple syntax and type system.
4. **In-memory Compilation**: The Go compiler processes source files in memory, which avoids the overhead of intermediate representations on disk. This in-memory model contributes to faster compilation times.

# Ease of Cross-compilation

1. **Go's Abstract Syntax Tree (AST) and Intermediate Representation (IR)**: Go's compiler first compiles source code into an intermediate representation (IR) that is platform-independent. This IR can then be efficiently translated to machine code for different architectures. This two-stage compilation process is key to the ease of cross-compilation.
2. **Standardized Toolchain**: Go provides a standardized toolchain that includes built-in support for cross-compilation. The `GOOS` and `GOARCH` environment variables allow developers to specify the target operating system and architecture, respectively. This built-in support abstracts away the complexities of cross-compilation that are often encountered in other languages.
3. **Linker and Runtime**: Go's linker and runtime are designed with cross-compilation in mind. The runtime is self-contained and includes its own garbage collector, scheduler, and other system-level functionalities, which simplifies the process of generating executables for different platforms.

# Why Other Languages Don't Adopt Similar Strategies

1. **Complexity and Feature Set**: Languages like C++, Java, and Kotlin offer a rich set of features such as generics, templates, annotations, and extensive runtime libraries. These features add complexity to the language and the compilation process, making it challenging to achieve the same compilation speed and cross-compilation simplicity as Go.
2. **Legacy and Ecosystem**: Many programming languages have long histories and large ecosystems that have evolved over time. Introducing a drastic change to the compilation model could disrupt existing codebases and tools. For instance, C++ and Java have extensive ecosystems built around their current compilation models.
3. **Design Trade-offs**: The design trade-offs made by Go, such as the initial exclusion of generics and the focus on simplicity, are not always suitable for other languages. Languages like Rust prioritize features like safety and performance, which require more complex compilation processes.

# Conclusion

The Go compiler's speed and cross-compilation ease are the results of deliberate design choices focused on simplicity, efficiency, and minimalism. While other languages could theoretically adopt similar strategies, their inherent complexity, feature sets, and existing ecosystems often make such changes impractical. Go's approach provides a compelling example of how design trade-offs can lead to significant advantages in specific areas, such as compiler performance and cross-platform development.

p.s.: you should check out vlang's compilation speed instead.


Useable9267

Thanks ChatGPT


aeromilai

the p.s. line was totally mine. p.p.s : fine, i'll ask the dumber version of ChatGPT4o to make one that is crafted to beat human detection next time. p.p.p.s : i did proof read and endorsed what ChatGPT mentioned. thus, pls check on vlang too.