Hacker News
Actix Web 3.0 (dropbox.com)
156 points by lukastyrychtr on Sept 18, 2020 | 108 comments



Congrats on the release! Actix is super exciting, and it's great to see how the community stepped up to maintain this amazing project.

Is there any effort to bring a Rails/Django like framework to Rust? AWWY [1] seems to indicate that there isn't anything like it in the Rust ecosystem, but I'd be curious to know if there's anyone working on something in that direction. All of the frameworks I've seen seem pretty "bare-bones" by the standards of Django/Rails (then again - it's hard to beat these frameworks in terms of feature-completeness).

[1] https://www.arewewebyet.org/


Rocket seems to be the closest. It has a philosophy of including integrations with other libraries (templating, databases, etc) out of the box. But I think it'll be a good while until it gets close to Django/Rails. To be fair I think Django, Rails, and Laravel are the only 3 frameworks that have this level of functionality.


I haven't used Django, Rails, or Laravel but Elixir's Phoenix is significantly higher level than Rocket.


I've used all 4 and can support the sentiment. Though Phoenix (more the elixir ecosystem) is lacking the community critical mass to compete against the other 3...for now


Phoenix is fantastic.


Personally, I strongly prefer light libraries over comprehensive frameworks. Yes, you have to build up your app yourself, but at least you understand what you're doing. And when something goes wrong, you know how to fix it. You can follow the code. This is super hard with a framework like Django.


I totally understand the sentiment, and generally, when I do solo projects or projects with small teams, I also prefer light frameworks.

In my experience, frameworks like Django and Rails shine when you have to collaborate with large teams on solutions though - the fact that there's comprehensive documentation, best practices, and clear standards make it so much easier to work together on a project. I do think Rust brings a lot of assets through its type systems when collaborating, but rolling your own auth/admin dashboard/orm is a hefty investment for most mid-size companies.


I agree. Admin can be complicated, but the breadth of authentication is surprising when you start enumerating the features. "I forgot my password" and "remember me" are the easiest ones, and yet they're not an hour's job. Authentication is not where a company should invest its money. Use a well-proven library and keep solving problems that bring home money.


Yes - plus, something like Rails is great for getting people on board. If the first experience of a new language is being able to get shit done at record pace, it really motivates you to learn more about it as well.


I’ll add in a datapoint from someone that’s used Django almost exclusively for web work over the last 5 years. I’ve had to debug Django many times and most of the code following has been my own. When I have had to dig into Django’s internals their extensive test suite and clear code structure has made it easy enough to find the issue.

Where I do find issue is that the lifecycle for their generic view API can be opaque. The docs have been improved there lately though.

I’ll echo other comments in that Django allows new developers with no experience in Django to be productive quickly. The docs are the MVP there and cover everything from tutorials to in-depth feature write-ups with examples and best-practices.


A Rails style framework would be really nice. I tried to get a Rocket/Diesel/Juniper setup working and it was a lot of futzing with types. One framework would be a lot smoother and get around any orphan rule issues.

The only question is how much Rails-style magic we'd want. We could get pretty crazy with macros but IME the debugging story with the macros used in Juniper wasn't great (maybe it's gotten better?).


> the community stepped up to maintain

More like forcefully commandeered if memory serves.


Unfortunately, there are readability issues in dark mode (Firefox, MacOS) - inline code blocks are unreadable. To fix the issue, run the following in the browser console:

    $$("html")[0].classList.remove("paper-noir")


Anyone else getting this annoying jerking fixed menu at the top when scrolling?

Why do people insist on doing fixed menu bars? Does anyone like them? I don't think I have ever used a website where it improved the experience. Phones have had reactive scrolling for years, so a quick flick and you're at the top. So I don't even know what problem they're trying to solve.


I feel you. Fixed position elements are really an anti-pattern in many cases. This includes fixed "back to top" buttons and the like. They just clutter up space and distract from content.


I mean, even a small “back to top” in the bottom right corner would be better than a fixed menu at the top. At least with that, it's not taking up screen height. Phone screens are small; stop putting stuff in the way that's not needed.


Is it true that actix-web doesn't actually use the actix actor framework anymore? I see it listed as a dependency in "actix_web_actors", but it doesn't seem to be used internally...


Yes, the new core is now faster than ever. I think they said that actors are awesome but that at this level, for this use case, it was better to start a new design.

They kept the name actix-web because it was already quite established and they didn't want to start from zero.

You can still use actix in your project. It's still actively maintained.


Was it actix-web whose Git repo was removed during the brawl with the developer?

How was it resolved?


According to the /r/rust thread it seems to be working out.

https://www.reddit.com/r/rust/comments/iqq8k9/announcing_act...


You might note that Rust was conceived by Graydon Hoare and now "the community" owns the project, and the creator is no longer involved. Much like Nikolay Kim (who was also a developer of aiohttp for Python, a very impressive FOSS record!)


A lot of people posted in a thank you thread, and eventually the author put the repo back, and if I remember correctly a few people got owner-like rights. Since then I have no idea whether the original author works on it or not.


Bullying open-source developers is not exactly the best course of action if one aims for the better adoption of a particular project. It is good that the situation was somewhat resolved, but I still feel sorry for the original developer, and I am not sure if I would want to use Actix after "the community" (at least some awful parts of it) did such outrageous things to the developer.


This is a pretty one-sided recollection of events. The main issue was caused by actix-web's adoption outgrowing the original author's goals. It should have been a good thing and there were better ways to deal with the situation, like encouraging the community to fork the project. (And what happened in the end is the opposite, he forked his own projects and clearly labeled them as being personal).

You cannot have it both ways: if you create an org on GitHub, a website, and speak as a team ("we"), adoption will bring expectations from your users and contributors, and those are typically not just to beat other frameworks at TechEmpower benchmarks. I don't think many people want to use a performance experiment.

To be clear, I'm sure he also received a lot of unwarranted criticism, but that happens everywhere. It doesn't justify being hostile to legit contributions.


The "Community" that harassed the developer is probably not the same community that now works on the code.

These mofos from Reddit who storm a GitLab ticket after it was posted there usually do not actually contribute to Open Source projects. They are pseudos who run their mouth but don't do the work. They are the ones who set up the wiki, but never write a single line of code.


Honestly, the ones that set up a wiki but not the code are at least as valuable as the ones writing the code.


At most as valuable. A Wiki without any code doesn't bring much value.


Was it in your opinion the "actix community" that piled onto the developer?

Because from my perception it was more of the "unsafe Rust code is the devil community", which wasn't really using actix much beforehand. With that as a background you would be "punishing" the project for actions outside their core community (and control).


"actix community" and "unsafe Rust code is the devil community" are not mutually exclusive.


The overlap is tiny. Usually the latter group are just groupies. Few of them contribute.


Web developer currently learning Rust and curious about Actix Web here: how does it compare to, let's say, Node.js in regard to performance and security? Any feedback from real-world projects?


More "mechanical" stuff like typos in payload fields and deserialisation is MUCH safer (and more pleasant with the derives!) due to the benefits of Rust's type system, but broader stuff like authenticating requests and user roles is still up to the user to implement well for a web app to be safe. IMO the language helps a ton here too.

Performance wise, I don't think I need to repeat half the internet here, but I'll just say you'll be blown away by how little RAM a Rust app/container uses for the number of users hitting it. YMMV of course ;)


> how little RAM a Rust app/container uses for the number of users hitting it

I thought that was only for synchronous libraries/frameworks? With async it blows out quite a bit. Everyone except Rocket has async functionality now too.


This doesn't seem true. My mental model says that although synchronous may require somewhat fewer resources when there are few connections, async will win as the number of connections increases.

To illustrate this, consider the cost of spawning a new thread. The stack of a thread is usually a few MB, but let's use 1 MB for simplicity. Then 1000 concurrent connections is 1 GB of memory just for stack space.

With async/await, you don't pay a megabyte per connection, because async uses perfectly sized stacks that are typically much, much smaller than a megabyte.
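As a rough sketch of why (assuming the tokio runtime here; the read-and-drop handler is made up): each connection becomes a spawned task whose "stack" is a state machine sized to the locals held across .await points, not a multi-megabyte thread stack.

    use tokio::io::AsyncReadExt;
    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080").await?;
        loop {
            let (mut socket, _) = listener.accept().await?;
            // One task per connection. Its memory footprint is roughly
            // the locals held across .await points (here, the 1 KB
            // buffer), rather than a preallocated thread stack.
            tokio::spawn(async move {
                let mut buf = [0u8; 1024];
                let _ = socket.read(&mut buf).await;
            });
        }
    }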

Of course you can cap the number of connections, but IO speed puts an upper limit on how fast you can serve a connection, which means that the cap limits the number of connections you can serve per second.

Additionally, I doubt that the overhead at a few connections is large. There's a reason people call async/await a zero-cost abstraction, even if using a runtime such as Tokio introduces some amount of cost.


Just going off what I remember from when async started to stabilise; see the top comment on this Reddit thread. It may be fixed by now, or it may just have been reserved rather than used memory, but many were commenting on it at the time.

https://old.reddit.com/r/rust/comments/cych6a/async_performa...


There was a bug at some point where the size of async functions grew exponentially in certain cases, but it should be fixed by now.


Probably worth measuring whether that was actual memory consumption or just memory reserved by the kernel. Spinning up several threads will reserve memory, but it won't be allocated until you actually try to access it.


I didn't measure any significant difference in memory consumption with async vs sync. I didn't even need to measure perf to notice a difference.


After one year of full-time Rust (starting early 2019, details may have changed since) for web (API) development with Actix, I would not recommend using it for any serious web project.

The language is awesome and the performance too.

BUT, there are too many things that will distract you from being productive (compared to Node.js + TS or Go): unstable libraries, compile time is atrocious, the cognitive load required to read Rust is high (it breaks your flow), the toolchain (RLS auto-completion) is weak, CI/CD costs are too high. You will spend most of your time solving Rust's problems instead of your project's problems.

In my opinion Rust is good on a 10-15 year timescale. But your project needs to survive 10-15 years...

You can read the details here: https://gitlab.com/bloom42/wiki/-/wikis/engineering/language...


> toolchain (auto-completion) is weak

Based on your link, it looks like you are specifically referring to RLS. With the newer contender rust-analyzer[0] auto-completion is very good. In combination with TabNine[1] (for Rust the Pro version is free), auto-completion is pretty amazing by now.

[0]: https://rust-analyzer.github.io/

[1]: https://www.tabnine.com/


Yes!

I've added a little disclaimer because my experience was last year. The ecosystem is moving fast, so some details may have changed (which is a con for me: too unstable).


> which is a con for me: too unstable

Totally unlike Node, right. ;)


That's interesting! I had the opposite experience with Rust

With 5 years in PHP, 9 years in Node, a few projects in Haskell, and sporadic Rust usage over the last couple of years:

- PHP was great until they added autoloading and everyone started using a half-assed package manager
- Node was great until everyone started using Babel and they broke require/import
- Python was OK until they split the ecosystem into 2 vs 3
- Haskell keeps getting better (Stack, solved record name clashes) because things are rarely broken. It's too bad adoption is limited and there's not much commercial interest
- Rust is my new favourite technology: it learned package management from Node and functional programming from Haskell. New features have been painless, and upgrades easy


NodeJS is my primary stack, and I've built toy projects with Rust since 2014, but never used it in production.

Rust offers performance improvements in CPU and memory usage of the core application logic, and that's almost never the bottleneck for typical web apps. Almost always, they're limited by IO, and I don't see any reason why it would be better on Rust than on Node. Also, Rust offers the security of safe access to memory, but that's a problem that exists in C and C++, not in GC-based languages. Finally, Rust offers a great type system, but Typescript has a very good one as well, and offers a much easier transition for JavaScript Node developers, as well as a much more mature NPM ecosystem.

Overall, I just don't see how Rust would make much of an improvement over existing state of affairs.


Rust offers a few things over Node in my experience:

- Super fast startup
- Single binary distribution (you can integrate your static assets with rust-embed)
- Minuscule RAM consumption (0.5 MB for a GraphQL server)
- The Cargo build system is much easier to work with than webpack
- No mismatches between TS types and JS source

However, the big caveat is that the documentation and libraries are still quite lacking.


Anything but the last one doesn't really matter in my case. Node.js app startup takes 4-5 seconds at most. The binary could be well into the gigabytes before it made any difference. I'm a pure backend developer, so I don't use webpack.

And the mismatch between TS and JS is a pain, but that's why I'm looking forward to Deno.


> Rust offers performance improvements in CPU and memory usage of the core application logic, and that's almost never the bottleneck for typical web apps.

Has anyone actually managed to find that this is the case? Just because your app is stalling on db queries doesn't mean it's "IO bound". Even the most trivial service that's just routing messages between two other services is likely to be radically faster when implemented in Rust vs JS.


> Just because your app is stalling on db queries doesn't mean it's "IO bound".

OK, "IO bound" may not be the best description in this case. Because you're bound not by your ethernet card or operating system drivers, but literally by the work that's happening on a separate, database server's CPU. I should work on my terminology, thanks for the correction.

In my experience, it is usually waiting for the DB. Regardless of whether your web app is well or badly written, it would, by my estimate, get no more than a 5-10% request-time improvement even if you made the app CPU infinitely fast, as 90-95% of the time is spent waiting for that database query.

But if you spend the same developer time rewriting all your ORM-generated N+1 select statements into a single SQL query, you would still spend the same proportion of time waiting for the database, but your overall latency would drop tenfold. I've had situations where I got 10-20x performance improvements without touching application code at all, just by tweaking the database indexes.
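To make the N+1 shape concrete, here's a runnable sketch (assuming a recent version of the rusqlite crate; the schema is made up) of the pattern being described and its single-query rewrite:

    use rusqlite::{Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute_batch(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
             CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
             INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
             INSERT INTO posts VALUES (1, 'hi', 1), (2, 'yo', 2);",
        )?;

        // The N+1 shape: one query for the posts, then one more per post.
        let mut stmt = conn.prepare("SELECT title, author_id FROM posts")?;
        let posts: Vec<(String, i64)> = stmt
            .query_map([], |r| Ok((r.get(0)?, r.get(1)?)))?
            .collect::<Result<_>>()?;
        for (title, author_id) in &posts {
            let name: String = conn.query_row(
                "SELECT name FROM users WHERE id = ?1",
                [*author_id],
                |r| r.get(0),
            )?;
            println!("{} by {}", title, name);
        }

        // The rewrite: the same data in a single round trip.
        let mut stmt = conn.prepare(
            "SELECT p.title, u.name FROM posts p JOIN users u ON u.id = p.author_id",
        )?;
        for row in stmt.query_map([], |r| Ok((r.get::<_, String>(0)?, r.get::<_, String>(1)?)))? {
            let (title, name) = row?;
            println!("{} by {}", title, name);
        }
        Ok(())
    }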

So rewriting it all in a new language, where you can get at most a 10% improvement, while you probably have areas where you can get 200-500% improvements, seems unwise.


If your database queries are so bad that you can get such big improvements by changing the queries and indexes, then obviously focus on that, not the client programming language. But once you've done that, your programming language is responsible for a larger share of the total time, since it runs in series with the database query, not in parallel. The benefit, in my experience, is much larger than 10% then.


Almost every web app. Exceptions are stuff like dynamic map tile generation. Even WhatsApp scale runs on a “slow” dynamic language.


I'm not sure what you mean.

> Almost every web app.

No? My point is exactly that almost every web app is not io bound. They're just... slow. They write bad queries, they make bad usage of resources and block on requests, etc.

> Even WhatsApp scale runs on a “slow” dynamic language.

But that has nothing to do with being IO bound. Whatsapp, via Erlang, works by having tons of connections on a single box. And you can watch plenty of talks by them about how they've had to work with BEAM to get Erlang to actually be IO bound.

"IO bound" is a meme. Most people just mean "my queries are slow". Unless you're saturating your NIC, you're not io bound with regards to throughput, and unless your queries execute in ~<100ns you probably aren't IO bound on latency either.


You're right that most web apps aren't actually "IO bound" per se. They aren't making optimal or maximal use of either network or disk IO.

However the important point is that they aren't CPU bound either. Most are "user acquisition bound" (like tens of requests per second peak). Some are more like "developer time bound" (cutting the server farm in half is not worth the cost in dev time).

Either way, using a slow development framework (including language, DB, etc.) that the developers like is a reasonable choice for what they're trying to do. (I personally wish most software was faster, but they have no reason to care what I think.)


I'm in full agreement that you can prioritize other things over efficiency, and that a framework in a slower language can be the right call.


Actually, I thought about it some more. I think a server that spends 99% of its time blocked on IO is "IO bound" even if it's because no one is sending it requests. It's a very different condition than saturating the NIC, but it's still meaningfully waiting on IO. Like, the process has to be spending its time somewhere, and it doesn't know "why" it's blocked.

Maybe the terminology around this could be developed some more.


I think we can at least agree that the terminology is fairly broken here, people very often say IO bound when they mean something totally different, and it is extremely common to hear "X doesn't need to be fast, it's IO bound anyway".


When people say IO bound in this sense, they mean that's the lower bound on what your users experience:

4G latency 50ms

AWS inter AZ latency 1ms

Thread context switch 2us

The response times from things like search APIs even at FANG companies with vast resources and highly skilled engineers are often in the hundreds of milliseconds.


Typescript allows you to have massive gaps in your types. I've been bitten by it so many times. As a huge proponent of types, sometimes I think it would be better not to have it, because of the trust you put in your type system and the mental load of always keeping in mind "oh wait, maybe someone is using this as any somewhere". Node event loop GC pauses are also a problem and they contribute to instability and timeouts.

Rust can make app performance easier to reason about, improve performance, bring a real type system and all of this with a language which is just marginally harder to learn.


Could you please share your stories? I've been bitten by the difference between Typescript compile-time and Javascript run-time in the past, but I've come to learn their limitations and ways to overcome them.

> oh wait, maybe someone is using this as any somewhere

There's a single option to disable `any` in all of your codebase. I've also found the `unknown` type very useful for situations where you just don't care about what type you have, but would've had to use `any` before.


Performance rocks. For services that do a significant amount of computation/data processing, Rust is typically an order of magnitude faster and uses less memory. If your service is bottlenecked by a database or an external web API, Rust won't magically solve your problems.

Rust has a pretty strict type system, and there's a lot of focus on correctness and reliability. Rust's enums with data are great for handling application state. This helps security, since the language stops you from making entire classes of silly mistakes.
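As a small illustration of what enums with data buy you (the PaymentState type is made up): invalid states become unrepresentable, and `match` forces every variant to be handled.

    // Illustrative only: the compiler rejects any `match` that
    // forgets a variant, and variant-specific data only exists
    // in the variants where it makes sense.
    enum PaymentState {
        Pending,
        Paid { transaction_id: String },
        Refunded { transaction_id: String, reason: String },
    }

    fn describe(state: &PaymentState) -> String {
        match state {
            PaymentState::Pending => "awaiting payment".into(),
            PaymentState::Paid { transaction_id } => {
                format!("paid (tx {})", transaction_id)
            }
            PaymentState::Refunded { transaction_id, reason } => {
                format!("refunded tx {}: {}", transaction_id, reason)
            }
        }
    }

    fn main() {
        let s = PaymentState::Paid { transaction_id: "tx_42".into() };
        println!("{}", describe(&s));
    }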

I've written some sites in Actix and it didn't feel too different from using express.js (with the caveat that I am experienced in Rust, so the language itself isn't an issue for me).


https://www.techempower.com/benchmarks/ gives you some idea about the performance.


That’s more than a little impressive.


Perhaps, however, according to [0]:

"The Actix benchmarks which are winning are written at an extremely low level, with manual parsing, hardcoding header values as 'static string literals, ignoring HTTP methods, and basically being fast by skipping all of the work a real-world HTTP server would need to do to be effective"

The quote is 9 months old, but I doubt it has changed since.

[0] https://www.reddit.com/r/rust/comments/e7xwma/rust_actix_is_...

[1] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...


There is a very recent, closer-to-real-workloads benchmark: https://matej.laitl.cz/bench-rust-kotlin-microservices/


node is based on various libraries, among them, v8 and libuv.

v8 is a JavaScript engine. An oversimplified way to describe it is that it has an interpreter and a JIT compiler. Compiled code is faster, but not all code is eligible for that. If you write code that can be optimized by v8, it can run very, very fast. Most of the time v8 will give you decent performance, but if you want to maximize performance by minimizing deoptimizations (e.g. tracing compiler behavior using the --trace-opt --trace-deopt CLI arguments)... you will end up spending more time than you would using Rust.

libuv is a cross-platform library written in C that handles all the heavy lifting for node. Using libuv, node creates an event loop and a thread pool, where asynchronous operations can be scheduled and executed concurrently. It also takes care of file I/O and networking.

node gives you a decent level of performance when you can delegate most of the work to libuv or to built-in JavaScript functions. But when running user-defined code that is CPU bound, node is not the right tool for the job. CPU-bound tasks tend to block the event loop, or to make ticks long enough that they become a problem.

Blocking or slowing down the event loop causes time-sensitive tasks scheduled for future event loop ticks, such as processing events from sockets, to time out. This can be prevented using worker threads, which are a recent addition to node... but almost all of node and its package ecosystem has been built around the absence of user-facing threads.

node can be used to build robust, highly available, performant and secure systems, but you need to know what you are doing and what the limits are. I think node is not the right tool for every job, but when it is, it can be highly productive and save time.

Security-wise, node is hard to audit. Packages tend to have many dependencies, and many packages are not built with security in mind. JavaScript itself is a language built with browser security in mind, but it is also a language where almost everything is mutable, including built-in types. Prototype pollution is a big problem.

Rust is more flexible: it doesn't impose a threading model on you, you have access to many concurrency primitives, and all your code will be compiled in a much more predictable way. I am not familiar with the security aspects of Rust, but at the very least it seems much safer than C, where it is not hard to create unsafe code.


Fun Trivia Fact: long ago, the Rust runtime (when it existed) had libuv in it too!


I actually sort of miss the early green-threaded Rust that I first played with. I understand why that direction was abandoned and I'm happy with how Rust turned out, but I do wish for a "better Golang", i.e. Rust plus a green-threaded runtime that lets me write blocking code that is automatically async with an M:N threading model. Maybe one day Rust can do that too, à la Java's Project Loom.


Yeah, there is certainly room for an interesting language in that kind of space.

I don't think Rust can or will ever choose that; Loom works because Java already has a runtime. It'd be a much bigger step for Rust.


One problem that was kind of a showstopper for me: Sentry support doesn't really work with recent Actix versions.


AFAIK the Sentry people are big on Rust and very responsive. I’m sure they’d react well to a bug report. :)


I think they were aware of it

https://github.com/getsentry/sentry-rust/issues/187

I did build a nice Rust boilerplate though, if anyone is interested

https://github.com/ncrmro/planet-express


An often overlooked but very important point is the ease of use of a programming model.

Node.js built its concurrency model on callbacks, which makes non-trivial applications difficult to debug and evolve.

Since then, new paradigms have emerged, such as promises and structured concurrency (https://vorpus.org/blog/notes-on-structured-concurrency-or-g...).

Rust built its concurrency model with those new paradigms in mind, mixing them with its awesome ownership model. In the end, Rust brings much stronger guarantees about your code's consistency compared to Node.js, which can save you a lot of time and energy on complex concurrent systems ;-)
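A tiny sketch of what "stronger guarantees" means in practice: shared mutable state has to be made explicit (Arc for sharing, Mutex for mutation), and leaving either out is a compile error rather than a data race.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared state must be explicitly wrapped for sharing (Arc)
        // and synchronised mutation (Mutex); the compiler enforces it.
        let counter = Arc::new(Mutex::new(0u32));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();
        for handle in handles {
            handle.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }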


> Node.js built its concurrency model on callbacks, which makes non-trivial applications difficult to debug and evolve.

JavaScript has had async/await for four years (and promises for even longer than that).

JS has its issues, but it's not at all helpful when the criticism is badly outdated either.


Modern node development uses promises as well.


Yeah, in fact they were ahead of the game. Not to say I don't prefer Rust, but for a long time promises were a pain in Rust compared to Node.


They weren't more painful in Rust because they were implemented poorly or anything. Rust's memory model makes referencing data across scopes difficult. In a GC'd language, your closures can just reference everything willy-nilly.

Also, I'm not sure, but I think Scala had Futures before Bluebird.js existed, which was the de facto Promise library for JS, AFAIK.


I'm not sure if they were first, and they were API-incompatible with modern JS promises, but jQuery had deferred objects in 2011, two years before the first Bluebird commit on GitHub.

Bluebird wasn't the first JS Promise library. It became the de facto library because it was faster than everything else.

Edit: https://en.wikipedia.org/wiki/Futures_and_promises has a history


Nice. I am certainly not up on my JavaScript history. Bluebird was the first I used and it was still gaining popularity at the time.

I do know that promises existed a LONG time ago; I was just referencing languages that are still popular-ish. I thought the Scala implementation was pretty old, but it looks like Java actually has everyone beat.


I've just started to rewrite my NodeJS-based prototype API using Actix.


Congratulations, this seems really cool ;-)

Actix was already a great project but the community-driven switch mentioned here is a great step toward better support and broader adoption.

BTW, does the fact that the announcement was made on Dropbox Paper have something to do with Dropbox as a company? (I heard they are investing heavily in Rust.)


It's just what the devs used to collaborate, and they published it in place rather than moving it, as reported in /r/rust.


>> Extensive usage of the projects helped us resolve memory leaks.

Huh? I thought Rust was memory safe? At least that's how it's promoted.

What's the point of Rust when one of its flagship projects (Actix) had memory leaks for years?


Under Rust's definition of memory safety, a memory leak is not unsafe. Data will not be corrupted or accessed improperly.

Also Actix has had a strained relationship with the Rust community for a number of years, because of Actix's cavalier attitude towards `unsafe`. The result of which is that the project is under new leadership, and safety is given a stronger emphasis. I also never really considered Actix a "flagship" project of Rust. It was popular and promoted itself, but it sort of set itself apart from the mainstream Rust community.


As lmkg pointed out, leaking memory is considered safe in Rust. There are a number of factors here:

- Leaking memory doesn't cause Undefined Behavior. It might eventually cause your program (or your whole machine) to lock up or crash, but there are many types of bugs that can do that.

- Categorically preventing memory leaks is difficult without a garbage collector. The biggest problem is that you can use reference-counted smart pointers like Arc<T> to create reference cycles (see the sketch below). Preventing you from doing that at the language level would be too constraining for a lot of real world programs. (In practice, those programs need to break cycles manually using "weak" references.)

- The difference between "reachable" and "unreachable" leaks isn't so big in practice. For example, if I'm caching some data in a global HashMap, I might forget to clear that map, or I might not clear it frequently enough. If the map ends up consuming all of my memory, the effect on my program is the same as if I'd just leaked all that memory. Either way, I need to debug what's going on and make a code fix. (In fact, at least some of the leaks that the Actix post is referring to are this sort of reachable leak.)

For all these reasons put together, Rust allows safe code to leak memory. In fact, there's even a safe standard library function for doing it (https://doc.rust-lang.org/std/mem/fn.forget.html). But it's quite rare to run into unreachable memory leaks in practice, because destructors almost always free memory for you automatically.
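Here's a minimal sketch of the reference-cycle case from the second bullet, using Rc (the single-threaded sibling of Arc). No unsafe anywhere, and neither node is ever freed:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
        // Close the cycle: a -> b -> a. Each node is now kept alive by
        // the other, so neither destructor ever runs and the memory
        // leaks, in 100% safe code.
        *a.next.borrow_mut() = Some(b);
    }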


I unreasonably avoid frameworks that have O(n) routing. For some reason I just feel like the standard should be O(1) in the number of routes. There is a crate, https://github.com/http-rs/route-recognizer, which does this, but most frameworks roll their own router, which just loops over the registered routes and takes the first match.

It can be slightly less flexible, but in most cases if you are using this sort of flexibility your URL structure is too confusing anyways. Since you can group together any number of regular expressions and match them in O(n) in the length of the URL, you clearly have a lot of flexibility available.

Of course, I should probably check the real benchmarks, because the NFA-based approach almost certainly has more constant overhead.
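For reference, the "roll your own" strategy being described is essentially the following (a deliberately naive sketch; real routers also extract :params and match HTTP methods):

    // First-match routing: every request walks the route list in
    // registration order, so matching is O(n) in the number of routes.
    struct Router {
        routes: Vec<(String, fn() -> String)>,
    }

    impl Router {
        fn add(&mut self, pattern: &str, handler: fn() -> String) {
            self.routes.push((pattern.to_string(), handler));
        }

        fn dispatch(&self, path: &str) -> Option<String> {
            self.routes
                .iter()
                .find(|(pattern, _)| pattern == path) // naive exact match
                .map(|(_, handler)| handler())
        }
    }

    fn main() {
        let mut router = Router { routes: Vec::new() };
        router.add("/", || "home".to_string());
        router.add("/about", || "about".to_string());
        assert_eq!(router.dispatch("/about"), Some("about".to_string()));
    }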


Is the route count unbounded? O(n) = O(1) for constant-bounded n.


Yes, this is why my discomfort is "unreasonable".

However I feel bad knowing that every route I add is slowing down my app. And I have the impulse to sort my routes by popularity to improve performance. I wish I could just trust the framework to match "as fast as possible".


If it makes you feel better, it's honestly probably faster for a certain number of routes or fewer.


The benefit of doing “O(n) routing” is that you don’t have to do complicated things so that a certain route gets more precedence. Real websites often have legacy URLs or one-off URLs for marketing, and being able to give these routes higher precedence by placing them higher in the route list is just easier. It’s also easier to reason about the router testing each route in sequence rather than attempting to understand how it assigned priority based on some leaky heuristics.

> if you are using this sort of flexibility your URL structure is too confusing anyways

A lot of the time, we don’t have a say in URL structure, unless we’re happy to break old URLs all the time.

Also, can we not use Big-O notation for quantities that never break 1,000, let alone 10,000? The cost of checking all the routes in order is going to be negligible.


How many routes do you have? I feel like you would have to run a huge number of endpoints for this to be an issue.


One of these days I'll write documentation for the toolkit I've been working on, which has a nice solution that satisfies this criterion.


I actually really like Actix. Switched to using Rocket because of some controversy in the Actix stuff that I didn't care too much to keep tabs on. Has anyone used Actix recently? Can you use it ergonomically on stable or do you need nightly? Would love to hear opinions from users here.


AFAIK Actix always worked on stable Rust.

I have a hobby project[0] running on Actix. A few days ago I updated from 2.* to 3.* and apart from one minor bug in a middleware (which seems to be fixed by now), everything worked without problems.

As for the ergonomics: I used Rocket about 2 years ago and it felt more ergonomic at the time but Actix has come a long way. I think it still hasn't reached the level of Rocket, but it's getting there.

[0]: https://hitsofcode.com


Ah, it's been so long I've forgotten. Thank you!


> Switched to using Rocket because of some controversy in the Actix stuff

That's the first time I've heard someone making a technical choice solely by something that is not technical.

Did you change from Actix because you're afraid that the controversy would eventually hit your own written software? Or just felt bad supporting something that others didn't want people to support? Would love to hear the reasoning behind the change.


Oh, I don't care if there's all the politics, but the code disappeared. Using a dependency that disappears like that didn't make me happy. It was totally a technical decision to try to avoid an evolutionary dead-end.

I wasn't even upset about the unsafe. I didn't care.


To be fair, I believe the controversy began with disagreements over the previous maintainer's use of unsafe and spiraled from there. So not entirely non-technical.


At my job we use Actix Web for a server deployed in prod. We never use Nightly, I don't think Actix Web depends on any nightly features. All stable :)


Excellent, thanks. I far prefer using stable, to be honest.


I want to love Rust, but my biggest pain point was trying to produce a simple REST web service, initially with Actix, though I think I was trying to use it at a time when the docs were not in sync with what was released. Not sure if this has changed, but I'd be willing to give Actix another try if that's the case.


You should give Rocket a try! The developer ergonomics are a lot better, assuming this is you learning Rust.

Performance-wise, Rocket is slower, but they're working on adding async support, after which it should show much better numbers.


Is it compiling under the stable compiler though? I tried switching compilers to try out Rocket and it just didn't work out very well for me, compiler errors everywhere that I couldn't seem to find a reasonable answer for. I'm not a fan of switching to anything that isn't considered stable unless I absolutely have to.


I believe async support, as well as stable support, is already on master, but there hasn't been a release yet.

There were two unstable features that Rocket was waiting on (which have been stabilized some releases back). Using nightly rustc is fine for the most part, but I wouldn't want to rely on it for production services.


I think the docs are much better than they used to be. Just my personal experience, but I found Actix Web very easy to get started with for my job. There's also a Gitter chat room where people are pretty responsive.


I will have to give it another try then, good to hear the experience has improved. I think my issue when I tried it was that the docs were out of sync with the version mentioned in the readme, so it became confusing trying to match things up into a functioning application.


> A powerful, pragmatic, and extremely fast web framework for Rust

In case anybody, like me, didn't know.


Does it mean that Dropbox uses Rust?


This post is hosted on Dropbox's Paper product, which allows for publishing by 3rd parties (anyone with a free Paper account).

This post does not have to do with Dropbox engineering, which I do not believe uses Actix. But Dropbox does use Rust quite a lot - their storage layer and sync engine, among other things, are implemented in Rust.


For one example of how Dropbox does use Rust: https://dropbox.tech/infrastructure/-testing-our-new-sync-en...



