Wow, I didn't even know this job existed. IMO Rust as a C++ replacement is fine; Rust as a C replacement has more trade-offs than I still care to make. C is still far simpler (you can still read K&R in a day and keep most of the language in your head), has faster compile times, and the pain points (*cough* macros *cough*) are often still pain points in Rust.
I think the biggest thing is that systems programming still requires a language that gets out of the way so you can focus on very technical problem domains where what the hardware is actually doing really matters. Rust is a language designed to get in your way and force you to create type abstractions. Adding too many abstractions can be exceedingly dangerous in an environment where not having a full view of how memory and hardware registers are laid out leads to even worse errors than just buffer overflows. IMO Rust makes this type of programming more difficult, just as C++ does.
The abstraction layers that one can build with Rust allow the programmer to actually focus on the actual business logic instead of trying to get low-level details right.
The innovation of Rust is the borrow checker, which is primarily of interest to systems programmers. If your primary interest is highly abstracted business logic, there are tools that don't require manual memory management or being pedantic about the different types of strings. You could just use Go, Java, Haskell, Python, etc.
1. Rust doesn't force you to do manual memory management. Rust's memory management is automatic by default, and only if you really, really want to can you do it manually.
2. Memory is not the only resource. The GCs in languages you listed only solve the memory management problem, but regarding the other types of resources, their ergonomics are often worse than C - you have to remember to close the resources manually and you get virtually no help from the compiler.
3. None of the listed languages address problems related to concurrency, e.g. data races. Ok, Haskell kinda avoids the problem by imposing other restrictions - by not allowing mutation / side effects ;)
4. Rust offers way better tools for building high level abstractions than Go, Python and Java. It has set a very high bar with algebraic data types, pattern matching, traits/generics and macros.
For embedded software, the underlying code basically reads and writes a bunch of registers. Unsafe memory access with side effects. The benefit of using rust here is that you can easily model these access patterns to make an api that cannot be abused. So the driver reads and writes addresses and the user code operates through the driver with all the benefits of ownership at hand to avoid race conditions and other foot-guns.
So, in this way rust does indeed “solve for” ownership of devices. You can’t have two threads (or interrupt handlers) mutating the same device without satisfying the ownership rules.
Does this include all 193 cases of undefined behavior?
a) UB enables valuable optimizations and is important to keep (or even add) when performance matters
b) UB makes the language unusable/insecure to anyone but genius level experts and should be avoided
Whenever someone (including famous/relevant people like Dennis Ritchie [0], DJ Bernstein [1], or Linus Torvalds [2]) tries to suggest cleaning up, removing, or simply not adding new cases of undefined behavior in C/C++, the optimization experts come running from the other room screaming about how important it is that "signed integer overflow must be undefined" [3] or else things will run a percent more slowly (signed overflow being just one example of UB). Also there are people who suggest adding new UB to Rust [4].
So really, either Rust is significantly slower than C because Rust doesn't have the UB you're criticizing, or C could be a cleaner language without compromising on speed and the compiler writers and standards committees are wrong. You choose, but both options are considered heresy.
[0] https://www.lysator.liu.se/c/dmr-on-noalias.html
[1] https://groups.google.com/g/boring-crypto/c/48qa1kWignU
[2] https://lkml.org/lkml/2018/6/5/769
An example of this is aliasing mutable references. Rust has been designed to assume that two mutable refs will never alias; if you attempt to create them anyway, it is instant UB. My understanding is that even creating the aliasing reference is UB, even if you never use it.
Another example would be uninitialized values. In Rust, the compiler assumes that values are always "valid" and initialized. Since you need to allocate memory before writing to it, you need some way to safely have uninit values, and this is what the MaybeUninit wrapper type is for. The wrapper allows you to safely have uninit values, and once you write to them you can tell the compiler that they are initialized - but if you tell the compiler too early by accident, it is UB.
References also are guaranteed to point to initialized and valid data, and can never be null, though my understanding is that there is some uncertainty about the exact rules of this with regards to uninitialized values and the exact semantics may change in the future.
(There are also a lot more things that I don't know very much about!)
All of these things are assumed to never happen, and optimizations are performed based on that assumption.
The nice part about rust, is that it makes it impossible to represent invalid states using the type system!
For aliasing, you can only have a single live mutable reference at any time, attempting to create a second one while another is live is a compile time error!
For uninitialized values, you simply can't create uninitialized values at all in safe rust. My understanding is that the only way to create an uninitialized value is with the MaybeUninit type or by using raw pointers.
So Rust still has heaps of UB, but it doesn't let you trigger it by default, so you still get some of the optimizations you'd expect.
I think there are some cases where rust is missing out on optimizations though, like with signed int overflow for example, and probably more that I don't know about!
However, Rust, the language has its issues on embedded.
Rust's ownership model is directly at odds with lots of embedded. "These bits inside a register are owned by the ADC, and those bits inside the same register are owned by the DAC" is not a happy thing in Rust.
Lack of arbitrary sized integers and how they slice.
Cargo. Quite annoying to deal with cargo and an embedded toolchain. The Rust embedded guys have done really good work if you're on ARM. If you're not, good luck.
That having been said, if you have to go implement something like Reference Counting in something not Rust, you will weep tears of blood debugging every single time your reference counts go wrong.
Embedded is engineering. It has tradeoffs. That's life.
1. A lot of the APIs make use of the typestate pattern, which is nice, but also very verbose, and might turn many people off.
2. The generated API documentation for the lower-level crates relies on your having a feel for how it generates the various APIs. It can take some time to get used to, especially if you're used to the better documentation of the broader ecosystem.
3. A bunch of the ecosystem crates assume the "I am running one program in ring0" kind of thing, and not "I have an RTOS" sort of case. See the discussion in https://github.com/rust-embedded/cortex-m/issues/233 for example.
For example, I see inefficient patterns that are common in frontend world but have no place in an embedded system being promoted as "proper" way of doing things.
It appears most of the peripheral support libs (eg those that use Embedded-HAL traits) are not designed with practical ends in mind; for all the I2C/SPI etc. devices I've used, I've found it easier to start from scratch with a `setup` fn and datasheet references, and the same goes for DMA transfers. So, you have these traits designed to abstract over busses etc; they sound nice in principle, but are (so far) not useful for writing firmware.
I get a general sense that the OSS libs are designed with a "let's support this popular MCU/IC, and take advantage of Rust's type system and language features!" mindset. A bare minimum is done, it's tested on a dev board, then no further testing or work. There are flaws that show up immediately when designing a device with the lib in question.
So, at least for the publicly-available things, they're designed in an abstract sense, instead of for a practical case.
svd2rust is pretty good for having safe abstractions for hardware registers. That said, as an example, no, the type system doesn't prevent you from deallocating your DMA buffer while the hardware is using it--I don't think it's reasonable to add that to the type system (and the type system right now doesn't know about DMA).
I think re DMA buffer lifetimes, the easy approach is static buffers; they never drop.
const uint8_t zero[] = {
CHAR_GRID(
_,X,X,X,_,
X,_,_,_,X,
X,_,_,X,X,
X,_,X,_,X,
X,X,_,_,X,
X,_,_,_,X,
_,X,X,X,_
)
};
desugared to a column-major array of 5 bytes:

#define CHAR_GRID(c1r1, c2r1, c3r1, c4r1, c5r1, \
                  c1r2, c2r2, c3r2, c4r2, c5r2, \
                  c1r3, c2r3, c3r3, c4r3, c5r3, \
                  c1r4, c2r4, c3r4, c4r4, c5r4, \
                  c1r5, c2r5, c3r5, c4r5, c5r5, \
                  c1r6, c2r6, c3r6, c4r6, c5r6, \
                  c1r7, c2r7, c3r7, c4r7, c5r7) \
    c1r1 | (c1r2 << 1) | (c1r3 << 2) | (c1r4 << 3) | (c1r5 << 4) | (c1r6 << 5) | (c1r7 << 6), \
    c2r1 | (c2r2 << 1) | (c2r3 << 2) | (c2r4 << 3) | (c2r5 << 4) | (c2r6 << 5) | (c2r7 << 6), \
    c3r1 | (c3r2 << 1) | (c3r3 << 2) | (c3r4 << 3) | (c3r5 << 4) | (c3r6 << 5) | (c3r7 << 6), \
    c4r1 | (c4r2 << 1) | (c4r3 << 2) | (c4r4 << 3) | (c4r5 << 4) | (c4r6 << 5) | (c4r7 << 6), \
    c5r1 | (c5r2 << 1) | (c5r3 << 2) | (c5r4 << 3) | (c5r5 << 4) | (c5r6 << 5) | (c5r7 << 6)

People who say this somewhat perplex me. Yes, you can get the syntax of the language down in a day, but that does little to stop you from running into your first Bus Error or Segmentation Fault within the first 30 minutes of trying to write any software, not to mention all the hidden errors/exploits you've put in your code that are only a platform switch or a compiler version change away from being found explosively. And you can completely forget trying to write a multithreaded C application, which basically confines you to very slow single-threaded code, completely tanking performance versus even the slowest dynamic language that supports multithreading, erasing any advantage of using C.
This is not a personal attack but when I have to try to come up with an assumed background for people who say this it usually involves some assumptions that the person isn't keeping in touch with the "real world" of some sort. I have trouble rationalizing it otherwise. Thus I'll usually ask what their background is when they say this to try to make sense of things.
The only places C is still the optimal choice is where C is already being used or in extreme platforms where there aren't good toolchains (various ASICs/rare 8bit microprocessors). There's zero reason to use it otherwise.
> the pain points cough macros are still often pain points in Rust.
Hygienic, syntax-checked macros are an entirely different animal than plain string insertion/substitution macros. I don't think this comparison is fair.
The abstractions can be used more like static interfaces you want to reuse. E.g. a byte stream interface, a regmap interface, etc.
Just the fact that Rust doesn’t do implicit integer conversions is by itself a huge win over C which has promotion rules that can easily trip you up when you are trying to exactly specify bits.
Except that isn't what most compilers expose, including the UB semantics.
One is in for a sea of surprises when trying to write portable C code and using K&R C as language reference.
If the answer is still No, then it can't replace C.
We do have plans for adding backends for unconventional targets eventually.
* Macros
* Ownership and borrowing
* Async programming
Async programming is the area where I would like to see the most improvement, especially in the standard library.
So much concurrent and parallel Rust code relies on third-party libraries because the standard library offers primitives that work but lack the "creature comforts" that developers prefer.
It would be really nice if the Rust standard library were to get structured concurrency similar to what Ada has:
https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency
https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...
The path Rust is going means async becomes viral, and is something I dislike a lot about JavaScript[0] and other languages I’ve worked in[1].
I’d love to see Rust avoid this trap.
[0]: In actuality I work in TypeScript; not sure which to use here. It's certainly by far the language I've used the most in my career now.
[1]: I remember it infected Python too and it was a pain as well when I did Python development years ago.
For those not aware of the history and looking for background, I laid it out here: https://www.infoq.com/presentations/rust-2019/ and here https://www.infoq.com/presentations/rust-async-await/
Those style systems are useful and have advantages, but they also have disadvantages. Not every tradeoff is a good call for every system, and that goes both ways in this scenario.
So, someone may like exceptions and green threads more than `Result` and `async` (and this is a completely valid PoV, even though I personally like the explicitness better), but thinking `async` is somehow special is just a conceptual mistake.
Edit to give a little more substance to the parallelism:
If you want to call a fallible function inside an infallible one, you MUST handle the result. If you want to use `?` then your function MUST return a Result.
Symmetrically, if you want to call an async function from a non-async one, you MUST `spawn` the future. If you want to use `await` then your function MUST be async.
The only practical difference between async functions and functions returning a `Result` is that `Future` is a trait, not a struct like `Result` (and that means that your future may have a lifetime that's not visible in your function definition, which is an endless source of confusion for beginners).
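The symmetry above can be sketched in a few lines. `?` forces the caller's signature to admit failure, just as `.await` forces the caller to be async; the future is lazy and does nothing until an executor polls it (no runtime is used here, so it is only constructed).

```rust
use std::num::ParseIntError;

// `?` propagates the error, so this fn MUST return a Result.
fn parse_both(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

// Likewise, `other().await` is only legal inside an async fn; a sync
// caller would have to hand the future to an executor (`spawn`) instead.
async fn other() -> i32 { 1 }
async fn caller() -> i32 { other().await + 1 }

fn main() {
    assert_eq!(parse_both("2", "3"), Ok(5));
    assert!(parse_both("2", "x").is_err());

    // Futures are lazy: nothing runs until an executor polls this.
    let _future = caller();
}
```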
The new experimental languages with effect types might be able to give us the best of both as they actually expose what you are talking about as an abstraction at the type level. We will see.
Minor nitpick, but it's an enum.
https://github.com/rust-lang/rust/blob/master/library/core/s...
But Rust is not a language which can dictate its execution environment. It needs to be able to exist in a C-ish world, and that's not something that supports yielding. It's a shame, but at least you can write kernel modules in Rust.
Ada is also used in embedded and low-level environments where the execution environment can be limited. The way that Ada "gets around" such limitations is through language "annexes". Annexes are optional language extensions for specialized use cases:
https://www.cambridge.org/core/books/abs/programming-in-ada-...
http://www.ada-auth.org/standards/22rm/html/RM-1-1-2.html
Rust partially does this already with [no_std] (https://docs.rust-embedded.org/book/intro/no-std.html), so the concept is not too different.
I agree with you on this in case of high-level languages. Rust is not that, and wouldn’t be half as interesting that way — but by going the system/low-level language route it does have to make certain design decisions that are not ideal. They can’t do what java’s loom does as it requires knowing every method implementation, which is fine with a fat runtime, but is not possible in case of Rust, with plenty FFI boundaries, etc.
(Generally, I avoid this problem these days by avoiding threads in favor of other abstractions or multiple processes communicating over an RPC channel).
> So much concurrent and parallel Rust code relies on third-party libraries because the standard library offers primitives that work but lack the "creature comforts" that developers prefer.
This seems to be a repeated antipattern across a lot of languages/ecosystems, resulting in fragmented and half-baked solutions. A fully-featured async standard library involves making a lot of opinion-based decisions, and not everyone will be happy. But it's better for the 80% of people who probably don't care that much, and nothing stops the other 20% from implementing libraries for their use-cases.
I think Elixir's sigils are probably the closest thing I've seen to "routine, encouraged macro use", since almost every application will end up with a bit of template-lite, almost-DSL pseudo-language for something or other. They're simpler than defining a grammar and writing a parser, and more maintainable than regex.
So, while macros are "discouraged" in Elixir, in practice they are very much encouraged by several prominent libraries. Picking on Phoenix is very easy because it's so blatantly bad in this regard (and others) but it's almost impossible to do useful things with Ecto if you go outside the macro bubble, etc., as well.
An example that shows how an ecosystem that definitely could have done stuff with macros (Clojure) has correctly decided that writing functions that take data is better than using macros:
Elixir and `Plug.Router`:
defmodule MyRouter do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/hello" do
    send_resp(conn, 200, "world")
  end

  forward "/users", to: UsersRouter

  match _ do
    send_resp(conn, 404, "oops")
  end
end
Clojure and `reitit` (https://github.com/metosin/reitit):

(def router
  (r/routes
   [["/hello" {:get (fn [r] {:status 200 :body "world"})}]
    ["/users" {:name :users
               :router users-router}]
    ["*" {:get (fn [r] {:status 404 :body "oops"})}]]))
P.S. I've used Elixir since 2015; this is not an opinion I've developed at a glance.

On a more serious note, "I want other people to write my code, but they're not following my standards" is rarely a sympathetic point of view.
For example, Rust std lib has blessed Mutex. Even though these can't be used in the kernel, it is still good to have them in the std lib for >90% of normal crates.
> The respondents said that the quality of the Rust code is high — 77% of developers were satisfied with the quality of Rust code.
Well, that’s exactly what I’d expect Rust developers to say. Nobody loves Rust more than Rust adopters. Would be interesting to see more objective measures of code quality (e.g. defect rate)
Also, the type of person to work on a Rust codebase might also be more likely to write high quality code in any language, as compared to the average developer (or even average Googler).
This creates a self-selection, where Rust lovers work on Rust projects and report utopian happy-go-lucky times. It's normal for most technologies.
Not necessarily, but it seems not unlikely.
> Is anyone who ever uses Rust a "Rust adopter" who is unable to give an unbiased opinion?
It’s still a relatively new and niche language, so, yes.
> Who would be able to give that opinion, in your view?
Nobody, and maybe that’s my real point, which is why I’d like some metrics to supplement the anecdotes. This especially applies to Rust, but I think also applies to any language.
Note that I don’t mean to imply that there isn’t value in anecdotes. There is.
To be honest, this is what I've taken away from this conversation so far: it doesn't seem like anything will satisfy your desires here.
> which is why I’d like some metrics to supplement the anecdotes
This conversation started with you not liking that the measure of quality is subjective, which is fine. How would you objectively measure code quality though? What metrics would you have preferred to see, other than the ones in this post?
> unbiased opinion
If the engineering manager after putting so much effort into switching to Rust, and trying to convince upper levels, putting their head at risk and after busting balls for months to make everyone learn Rust, comes into the office one day and asks if we love Rust, we the 77% that want to keep our jobs would answer "OF COURSE WE DO!!!!! COULDN'T BE HAPPIER!!!", with a big smile.
Me and all the other developers I know complain endlessly about the tools we're forced to use. We complain to anyone patient enough to listen, including all the engineering managers. No one I know ever got fired or laid off for this.
An increasing number of Android developers in Google are adopting Rust because of the org wide strategy rather than developer passion, so I guess the numbers in 2023 and 2024 would be more interesting to see.
50% of developers think they are as productive in a language they have four months of practice with as they are in a language they have fourteen years of practice with.
50% of developers think they are as productive in a high-performance bit-bashing-capable language as they are in a high-level glue language.
The people in this statistic are switching from languages they have years or sometimes decades of productivity in, and they're switching from languages like python and go and java. I see a lot of programmers who do similar switches never reaching productivity parity with C or C++. 50% of devs getting there in 4 months is amazing.
Anecdata but I found myself very productive with Haskell when I was learning it for grad school, to the point where I knew that if it compiled, it was most likely right.
I had similar experiences though not to that degree with Rust with very little time spent on it in comparison to Haskell. I feel a lot more comfortable sleeping at night over C or Python.
I recently told this to someone. The very next day my Elm code compiled just fine but it had three relatively tricky logical bugs which took me two hours to find and fix.
This was a rare enough occurrence that I remember it. Higher cosmic powers took note of my praise of strong typing and decided to teach me a lesson.
"Overall, we’ve seen no data to indicate that there is any productivity penalty for Rust relative to any other language these developers previously used at Google."
Weirdly enough, we saw more junior devs pick it up faster. They had fewer preconceived notions and practices to unlearn and were more willing to trust rustc. It's just that when they hit their stride they are still less productive than the senior devs.
> They had less preconceived notions and practices to unlearn and more willing to trust rustc.
this is really what I experienced: Rust told me a thing or two about coding I never realized. And it took me pretty long to accept that :-)
But rustc's error messages helped it click that there might be a race condition between when the application returns from main and when the background thread terminates. So the data really needs to have the 'static lifetime to be safe. It's a small, subtle thing, but depending on the application it could lead to real bugs. I've definitely written variations of that bug in C before. A newer dev would have just accepted that flat out without arguing.
A lot of the features Rust offers regarding traits and types can be emulated in C++ with templates, but the way C++ does it is far more obfuscated. Seeing the same thing implemented in Rust helped me wrap my head around what some complicated template nestings were doing ("Oh, this is implementing traits!") in our C++ code.
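The Rust side of that comparison, in sketch form: a trait bound states the interface up front and is checked at the function's definition, whereas the equivalent C++ template would only fail at instantiation time. Names here are illustrative.

```rust
// Illustrative trait; the C++ analogue would be a template "concept"
// encoded via nested templates or SFINAE tricks.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("int {self}")
    }
}

// `T: Describe` is checked where `announce` is defined,
// not at each call site.
fn announce<T: Describe>(x: &T) -> String {
    x.describe()
}

fn main() {
    assert_eq!(announce(&7), "int 7");
}
```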
In fact it's frankly unbelievable, I'd have to imagine these guys are coming from a C++ background.
You can see similar opinions expressed in this thread, like here, for example: https://news.ycombinator.com/item?id=36496654
I think C++ has its own legacy difficulties (which also make transitioning to memory safety tricky), and Rust's choice of borrow checking is only one (sometimes difficult) technique for getting these aspects. But there are almost a dozen other methods out there for getting memory safety besides RC, GC, or borrow checking.
I rather think that these other approaches aren't mature enough yet to enter the mainstream, which is why we haven't seen them there yet.
I actually really like the borrow checker as a tradeoff, I think it makes code much easier to understand and it makes all aliasing bugs impossible. The removal of aliasing bugs is I think an undersold benefit of using rust.
I'd be delighted to see it, because right now I am not aware of any practical way to have memory safe regions without static tracking of borrowing from these regions. It's either that or runtime checking.
I'll see about whether I can push an update. Thanks for the catch!
Given that #2 is talking about people who are all professional programmers and where only a small percentage of respondents previously knew Rust, that's pretty amazing to me.
This sort of surprised me, because rust felt a lot harder for me personally to learn than Go. But data is far more valuable than an anecdote so there you go!
Sounds incredible.
I wish there was more context to these, especially this one. For example, how much of this is perception compared to what they were used to (go?, Python?, C++?)? Or is it "any waiting is bad"?
From an improvement perspective, I'd also love to know why their builds are slow. Is it proc-macro heavy? Do they have wide and deep dependency graphs? Do they have large individual crates? And so on.
This being Google, it probably means something like "this C++ build takes 24 hours locally, but thanks to magical distributed build infrastructure it completes in 10 minutes, whereas Rust build takes 18 hours locally but even with magic does not complete in under 30 minutes, which is too long". That is important to Google, but it is almost completely irrelevant to anyone outside Google.
It is unclear to me whether improving rustc performance is the right solution to Google's problem. It is probable that working on Rust integration into Google's build infrastructure is higher ROI than working on rustc.
Yes, this is the problem. Waiting is always bad for productivity. Even a second is long enough to lose a bit of focus. When that stretches out to 10 seconds, it starts getting tempting to, say, check Hacker News and lose your train of thought. I believe that most of the programs that I might be tempted to write in rust could be written in an alternate language with a compiler that is up to 100x faster. Of course this hypothetical language would have to be simpler than rust and would lack many of its features. As it currently stands though, I believe that it will be impossible to make the rust compiler 10x faster, let alone 100x faster so it would be nice if there was more effort to design alternative languages that build on what we've learned from rust to make something better.
Rust itself might as well be considered a highly constrained macro language at this point.
Based on my experience they're overselling how easy it is to learn and underselling the compiler speed.
Compilation is fairly fast these days. I would say it's faster than C++ feature-for-feature, at least for clean builds.
But on the other hand most people could probably learn all of Go in the time it takes to begin to understand the borrow checker.
Faster than C++ is of course very faint praise. C++ is also very slow!
Which we seldom do on most C++ projects; we rather rely on binary libraries and build only our own code.
Also when comparing with Delphi, Ada, D, or even Haskell or OCaml, it isn't that great.
You might feel like pointing out that Haskell or OCaml can be even slower, which is true, however they package multiple toolchains and a REPL, and as of today Rust still isn't as flexible in having multiple toolchains for different purposes.
Depends on the project. Many commercial projects do vendor dependencies and build them too, because you can't rely on the OS version. Especially on Windows or with more niche dependencies.
Just compiling Boost takes 15 minutes - more than any Rust project I've ever compiled.
Not sure what you mean about multiple toolchains.
The long dependency trees are part of it, but usually not too bad and only really bad the first time, since you don't have to rebuild every crate every time (I could be wrong, but it seems that way). I haven't been using it day in, day out though. I've installed a few apps via cargo, and have done some experiments for service applications, and Tauri as well.
As for the day to day use and how painful it is... I haven't had enough exposure to really comment on... it seems "fast enough" but I'm not running compiles often enough, simply because my knowledge and experience aren't really great in Rust. I've looked at it and played with it a few times, then I set it aside for months at a time and every time it's like I'm starting over.
Where I'm working now, there are some serious issues that may result in areas needing better start time on services, so that may be an opportunity to advocate for Rust. I've never really loved C or C++, so I'm less inclined to want to use them.
Some of the stuff people say about Rust reminds me of iOS users talking about Android. "Tell me you are operating from a place of near total ignorance, without telling me that you're talking out your butt".
See: the number of people, here, acting like you can't do raw pointers in Rust, or acting like it's militant woke youngins forcing poor big Google to adopt a safer, productive language.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it."

― Upton Sinclair
The amount of dislike for Rust on HN is frankly unexpected. Part might be a response to evangelization, but I've seen more hate for the evangelization than actual evangelization. Sure, Rust ain't perfect, but C++ is even more imperfect. So that leaves me with job security in C++.
I do wish to know whether Rust impacted their velocity, and by how much.
This tends to lead to people putting in unsafe code to work around a borrow restriction. I don't do that, but I don't have deadlines.
How? Explain please
> Carbon Language is currently an experimental project. There is no working compiler or toolchain. You can see the demo interpreter for Carbon on compiler-explorer.com.
But this is Google, and the people doing self-assessments were likely influenced by the context of operating in cut-throat bureaucracy where self-aggrandisement is a requisite to career progression within the org.
Whether or not this survey was tied to any performance evaluation (and from the article it's not even clear that it wasn't), the relevant thing is whether the employees knew without a doubt that they weren't going to be compared against one another based on their self-assessment.
edit: I'm curious if the people downvoting disagree with my assertion that the survey methodology is flawed, or the assertion that it's unlikely to become as competent in rust in 2 months as you would be in languages you have years of experience with.
Upvoted even though I anecdotally disagree with your perspective based on personal experience. I wrote my first line of rust in March this year (just as a hobby), and now am one of the maintainers of a popular TUI framework (Ratatui). I feel just as productive or more than any of the previous languages I've written code in (over the last 30 something years).
I'm at the point now where I'm productive (took me over a month to even get to that point), but I still feel incredibly slow compared to Typescript. The compilation time doesn't help.
Anyway, thanks for the perspective.
I'm still skeptical that the survey reflects honest feedback given Google's culture, but perhaps I'm just biased from how long it's been taking myself and the rest of the team to achieve a higher level of productivity