Haiku OS ported and running on RISC-V (haiku-os.org)
304 points by highspeedmobile on May 12, 2021 | 86 comments



Haiku is ideal for embedded graphical systems, since it's designed as a single-user, low-latency, unified system. Because the technology stack is unified (from the higher-level user space down to the low-level kernel), with a unified desktop environment, unified IPC mechanism, unified file system, unified audio and media system, etc., it lets system builders make lean embedded applications and tools. Case in point - the Medo media editor, which handles 4K video, dozens of OpenGL plugins and binary add-ons, and the entire multilingual package is 1.28 MB. Crazy efficiency due to being a cohesive integrated system.

Haiku is the ideal system for a RISC-V embedded device (think car console, tablet, media centre, info kiosk etc)


Could this kind of technology also one day make its way to consumer laptops? It seems like low-end computers are still so damn expensive for how shitty they are. I feel like I get more bang for my buck with a Lenovo x220 + a fresh SSD + a 8GB RAM upgrade (running Xubuntu, Mint XFCE, et al) than any new low/mid-market laptop.


Haiku lacks a full-featured web browser like Chromium or Firefox. It does have a basic browser available, but its capabilities are very limited.


Honestly, as someone who uses Haiku casually I find the absence of a beastly browser to be a feature. There _is_ a browser, one featureful enough to read Wiki and such, but not featureful enough to blow away hours and hours on Javascript-heavy social web applications.


also, no Electron apps. all I see is upside


I ran minimalist Void Linux on a crappy dual-core Celeron Chromebox with 2GB (yes, two) of memory as a workstation for a few weeks last year.

It was impractical longer term, but really, as long as I didn't open more than a couple Surf (the Suckless browser, uses the WebKit engine) windows at a time it didn't feel slower than a tricked-out Mac or a beast of a Windows machine, for everything else. Amazing what running as little JS as possible can do. I had to run Docker and VMs elsewhere since they hog memory by design, but it was entirely fine as a code-editing and command-line workstation. A heavy IDE for, say, Java might have been painful if I'd needed that—which is kind of crazy, because those used to run just fine on well under 1GB of memory, too.

Using Void helped a lot because a bunch of the background garbage on something like Ubuntu, which is really getting out of control, wasn't hanging around eating memory and periodically waking up to burn cycles for unclear reasons. I imagine the benefits of Haiku are similar (big fan of BeOS back in the day, thing's UI responsiveness on mediocre hardware was downright magical)


That's interesting but GP points out that major applications like

> car console, tablet, media centre, info kiosk etc

would be possible but hard without a mainstream browser. Sure, one can develop a native application, but this is not how mainstream development is done these days. E.g. info kiosks basically are browsers that display whatever there is on a (frequently updated) server, and so there's zero maintenance for a kiosk.


There are plenty of companies that make desktop apps. Car consoles (generally) don't use browsers. I am sure that those applications would get on just fine without running Chromium. A car with a web browser is a nightmare.


Don't know about kiosks, but QML is quite popular in applications like in-vehicle infotainment systems.


There is no technical reason one could not port Chromium or Firefox at this point; it's simply that nobody has put the time in to do it (as you can imagine, porting a web browser is not an easy task.)


There are 600+ patches needed to build Chrome on BSD, and upstream is constantly churning, which means they need maintenance.

Upstream refuses to import anything related to BSD support, so they need to be maintained out of tree.

And this is for a much more popular set of projects.

"Not an easy task" is a bit of an understatement.


The demise of TenFourFox is relevant: http://tenfourfox.blogspot.com/2020/04/the-end-of-tenfourfox... .


I've built Firefox on NetBSD. I wonder how many patches are required for it.

Presumably the use of Rust is a dependency that needs to be fulfilled for building FF. Looks like there has been some work on getting Rust to play with Haiku at both the Rust and Haiku ends:

https://docs.rs/haiku/0.2.0/haiku/

https://www.haiku-os.org/blog/nielx/2020-09-06_rust_on_haiku...



Some of those are for optional features like WebRTC, they are not all needed for a functional browser.


This thread is about fully featured browsers though. I think without WebRTC you cannot call a browser fully featured.


Firefox 52 is the last version that can be built without Rust; it is kept in pkgsrc for that reason.


> as you can imagine, porting a web browser is not an easy task

I would imagine, but now I'm trying to figure out why; do they really use that much API surface? Network access should be simple enough, they need audio, keyboard input, and a canvas/surface to render to, but what else?


I think the problem is most modern browsers are a pile of open-source libraries at the bottom, any of which may use architecture-specific and/or compiler-specific native code. Also different systems have slightly different toolchains, for instance, compiling BSD on Linux can't be done because Linux GCC can't handle the divergent BSD GCC makefiles. D'oh! Wonder how that happened, surely they'll fix it soon...


HW accelerated audio/video encoders/decoders (eg for WebRTC), mic, camera, crypto, ...


They don't have any kind of advanced power management, which is a requirement for laptops


> Haiku is the ideal system for a RISC-V embedded device (think car console, tablet, media centre, info kiosk etc)

Haiku's predecessor BeOS was present in an even smaller version (BeIA) on embedded systems and Internet appliances [1], so it was kinda ahead of its time already.

[1] - https://en.wikipedia.org/wiki/BeIA


BeIA was still only x86/IA32. BeIA was very clever. It used a compressed file system and an ELF compression scheme, so in 8MB/16MB you could fit the entire OS install and still have space free... but it was not ever really embedded in the sense of running on low powered ARM or similar. There wasn't even a PowerPC/POWER port, despite BeOS originally running exclusively on PowerPC up till R3, when Intel was added.


I don't know, I would rather have something like RISC OS :)

https://www.riscosopen.org/content/


cooperative multitasking, no memory protection, can't take advantage of multiple cores, and very much tied to the ARM architecture?


Has Haiku improved on the crash-prone multithreaded coding experience, and the software rendering, from BeOS?


Yes. There are ports of Qt 5 for example, so you can avoid the BeAPI layer and write code that is portable. You shouldn't, but you can. The Qt/QML code is standard and all the threading and BLooper/BLocker dance is not a "thing" when you use it. But, these days, multithreading is a lot more prevalent so I don't think modern programmers have such an issue with it. I write multithreaded code all the time.
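For readers who never used the BeAPI: the "BLooper dance" the parent mentions is a pattern where each looper owns a thread plus a message queue, and other threads talk to it only by posting messages. Here is a rough analogue sketched in Python for illustration (the class and message names are hypothetical, not the actual C++ API):

```python
import queue
import threading

class Looper:
    """Toy analogue of a BeOS BLooper: a thread draining a message queue."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.handled = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def post_message(self, what):
        # cf. BLooper::PostMessage(): the only cross-thread entry point
        self.inbox.put(what)

    def _run(self):
        # cf. the looper's internal message loop dispatching to
        # MessageReceived(BMessage*)
        while True:
            msg = self.inbox.get()
            if msg == "QUIT":
                break
            self.handled.append(msg)

looper = Looper()
looper.post_message("B_MOUSE_DOWN")
looper.post_message("QUIT")
looper.thread.join()
```

The point of the pattern is that no locking is needed around the handler's own state, since only the looper's thread ever touches it.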


>Haiku is ideal for embedded graphical systems

Well, except there's no accelerated graphics driver (for any device) :(


IME: hardware accelerated UI toolkits are very slow on all but the largest/newest computers.

Rendering a GUI can be done in real time on ancient CPUs. This has been true since the 1980s and continues to be true today. If your app has a performance issue then you should consider cutting down on the animations/special effects/abstractions.


> Rendering a GUI can be done in real time on ancient CPUs.

That depends on the style of GUI. As soon as you introduce any amount of animation (even something as trivial as drag-and-drop, never mind touch-swipe or touch-zoom gestures), hardware acceleration can be quite useful.


> As soon as you introduce any amount of animation (even something as trivial as drag-and-drop, never mind touch-swipe or touch-zoom gestures), hardware acceleration can be quite useful.

Didn’t Windows XP (and prior) not render the window as it was dragged, but just a dashed box? And when you release the window, it’d be redrawn in the new spot.


I believe Windows 95 had only that behavior… but everything after it could render in real time.

In Windows 98, it's under the Control Panel → Display → Effects → Show window contents while dragging

https://copy.sh/v86/?profile=windows98

(The VM here is slower than a machine of the era.)

(I also want to say Transport Tycoon Deluxe's windows were draggable without needing a frame / in real time, and that was in DOS?)


XP rendered the window unless you had a few settings switched (or the classic theme on).


In fact, dragging a window in XP was the quickest way to see if hardware acceleration was properly enabled.

If you saw the window tearing while being dragged, chances were installing an extra driver would make the user a lot happier.


Haiku does some amazingly impressive things software-only, drag & drop and scroll really don't need hardware acceleration. If they do, there's a problem with your code.

Mind you, I can do real-time 1080p mp4 video scrubbing on haiku running on a thin client so...


I was thinking about it and the issue is really the "if there's no compatible hardware, llvmpipe will just take care of it" attitude. If you watch these software-rendered graphics carefully (or read the code), they make trade-offs that allow everything to work with low input latency and render in real time, and sometimes that means "corrupt"-looking graphics on the screen, which OpenGL-style graphics won't generate (the frame is either done or it gets skipped.)
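The two policies described above can be simulated with a few lines of Python (all names and the 16.7 ms budget are illustrative, not from any real renderer): the "software" style presents whatever is drawn at each deadline, while the "OpenGL" style only presents complete frames and drops the rest.

```python
def present_schedule(frame_costs_ms, budget_ms=16.7, skip_late=False):
    """Return what the user sees at each vsync, given per-frame draw costs."""
    shown = []
    for cost in frame_costs_ms:
        if cost <= budget_ms:
            shown.append("complete")          # finished within the budget
        elif skip_late:
            shown.append("skipped")           # done-or-dropped (GL style)
        else:
            shown.append("partial")           # show what's drawn (SW style)
    return shown

costs = [10, 25, 12, 40]   # hypothetical per-frame render times in ms
```

With the same workload, the software policy trades visual completeness for latency, while the skip policy trades latency for clean frames.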


It's certainly useful but it tends to break and in the context of embedded: constrain hardware choices to vendors known to have serious support problems.

CPUs are more than capable of rendering graphics needed for gestures like you mentioned.


The Macintosh did drag-and-drop with no hardware acceleration in 1984.


>IME: hardware accelerated UI toolkits are very slow on all but the largest/newest computers.

This has been my experience too with an old GMA GPU; I have to keep disabling hardware acceleration in everything I try to run (browsers, Electron apps if possible, desktop environments). The UI runs an order of magnitude faster on the CPU than on the GPU through OpenGL.


I don't think these anachronistic comparisons make any sense. GMA might do OGL2.1 at best, it won't work well with software with modern assumptions. I wouldn't be surprised if it doesn't use GPU at all and falls back to software OGL renderer.


It tries to do hardware rendering, I had to force software rendering due to artifacts and messed up colors in addition to slow rendering.


You should look at GUIs via Dear ImGui if this is an opinion you hold.

Immediate mode GUI creation. Very straightforward and just requires an OpenGL context (which can be software of course, so all CPU based if you want). A lot of games use it because it can be easily added as an additional layer to your final image (and is represented as just an image).
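For contrast with retained-mode toolkits: in immediate mode the UI is re-declared from scratch every frame, so there is no widget tree to keep in sync with application state. A toy sketch of the idea in Python (hypothetical names, not the Dear ImGui API):

```python
def button(draw_list, label, clicked_ids, uid):
    """Declare a button for this frame and report whether it was clicked."""
    draw_list.append(("button", label))   # "draw" the widget this frame
    return uid in clicked_ids             # input is polled, not event-driven

def frame(clicked_ids):
    """One frame of UI: rebuilt entirely from current application state."""
    draw_list = []
    pressed = button(draw_list, "Quit", clicked_ids, uid=1)
    return draw_list, pressed
```

Calling `frame()` each tick rebuilds the draw list, which is exactly why it layers so easily on top of a game's existing render loop.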


Sure, and immediate-mode GUIs are extremely unpleasant to use in power/resource-constrained environments.

You should look at GTK+, Motif, or Tcl/Tk. They're very straightforward and don't turn people's devices into hand warmers.


GTK in resource constrained environment? Uh...

Motif, along with X11 is just completely obsolete platform. (btw, this is what UHH thinks about it: "What Motif does is make Unix slow. Real slow.")

There's some weird GPU hate in this thread that I don't quite understand. Try to render dragging content of 1080p window at 60fps on cheap mobile ARM core, and you will likely blow your whole time budget just on filling the window with solid color...


Like on the PinePhone, using sxmo? Works great.

Ditching the _enormous_ Android stack is an easy win.


That was Brillo's main goal, except when it came to production it got rebooted as Android Things and now is dead.


Is this because they tend to heat up when you use a GL context? And the UI toolkits composite to the screen less often?

I'm not a mobile developer so this is all new to me. Thanks for the info.


I ain't sure if that's strictly necessary for that use case, given how snappy I've found Haiku to be even with limited resources. It'd probably be nice, but having seen plenty of embedded graphics, I'd hardly call them "accelerated" anyway.


For the apps it has this doesn't seem to matter; not everything needs pixel shaders to be performant.


Sounds like it would be interesting for a small-form-factor not-so-smart phone that's still hackable and versatile.


I'm actually quite sad that the Haiku community never managed to marshal the effort to do a Raspberry Pi port. It would have been a killer OS on it, but somehow it never panned out.

(I remember some early discussions in the forums where at first the Pi was dissed in favor of the BeagleBone, and then because of Broadcom, and then, later on, because of "lack of openness", so I'm chalking it down to a mixture of bias and a huge blind spot...)


We were never opposed to a rpi port, but it was much more difficult in the days of rpi1/2, yes, and the BeagleBoard was the initial target instead. These days there is some rpi4 code in the tree, but nobody found the time or motivation to work on it seriously, I guess.


So how production ready is haiku? Targeting the rpi seems like a strong “user-grab”.


Down thread folks were asking to run BeOS binaries on it, sounds like a great opportunity for a couple fun hacks.

* ppc -> risc-v binary translator, much more tractable with fixed width 32 bit instructions

* running this on an FPGA that already includes a RISC-V core like PolarFire and then instantiate a softcore PPC. hack Haiku to run the PPC processes on that core. Extra bonus points to dynamically instantiate PPC cores to meet load, make fast ones, wide ones, slow ones.

What a time to be alive!

https://github.com/antonblanchard/microwatt
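The "much more tractable" point is real: with two fixed-width 32-bit ISAs, the decode step of such a translator is just bitfield slicing, no variable-length decoding. A minimal sketch in Python, translating only PowerPC `addi` as an illustration (register mapping and the 16- vs 12-bit immediate mismatch are simplified away):

```python
def ppc_to_riscv_addi(insn: int) -> int:
    """Translate a 32-bit PPC `addi rD, rA, SIMM` into a RISC-V `addi`."""
    assert insn >> 26 == 14, "not a PPC addi (primary opcode 14)"
    rd = (insn >> 21) & 0x1F            # PPC rD field
    ra = (insn >> 16) & 0x1F            # PPC rA field
    simm = insn & 0xFFFF
    if simm & 0x8000:                   # sign-extend the 16-bit immediate
        simm -= 0x10000
    assert -2048 <= simm < 2048, "immediate too wide for one RISC-V addi"
    # RISC-V I-type: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011
    return ((simm & 0xFFF) << 20) | (ra << 15) | (rd << 7) | 0b0010011

# PPC: addi r3, r4, 8  encodes as 0x38640008
rv_insn = ppc_to_riscv_addi(0x38640008)
```

A real translator would also need to handle condition registers, the PPC memory model, and immediates wider than 12 bits, but every instruction is still exactly one aligned 32-bit word to decode.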


> hack Haiku to run the PPC processes on that core

That sounds like the Amiga PowerUp cards with extra steps. :D



The extra steps are what ensures an invariant.


I am reading this paper you posted (https://news.ycombinator.com/item?id=26844036); this is really good.

> the path of least resistance when choosing a strategy is to trust folklore, but the folklore is also suspect.

love it.

The bibliography itself is a work of art.


This is very neat. Though, I still don't have an ARM workstation... Will we leap-frog to RISC-V workstations in a few years?

I guess that new Apple M1 ARM would count but I have yet to get one.


It seems more likely RISC-V will displace ARM on the low-end where margins on things like licencing are the tightest. There are already high performance ARM designs for servers in particular, and that's more applicable to desktops than any RISC-V chip currently available. X86 will give them both a run for their money but it no longer seems completely obvious that everything hefty will still be running on x86 five or ten years from now, for the first time in a couple decades, really.


Also, RISC-V's "Choose the features you want to implement" system is a big advantage rather than a disadvantage in the embedded world.


Do you think adversarial country relations, especially from sanctioned countries like China and Russia, might count in favor of RISC-V development and adoption?


I cannot comment on Russia, not an area I am familiar with. But I believe that is already the case with China.

An industry planning organization in China has indicated the plan to standardize on RISC-V across the board, with domestically designed and manufactured processors. The intent there is, I'm guessing, to disentangle China from any IP issues, or supply interruptions should trade be disrupted. Chinese media has openly stated as such, for example this article from Sina.com (in Chinese): https://finance.sina.com.cn/tech/2021-01-18/doc-ikftssan7728...

Since it gives me reading practice, I've done a quick and probably slightly butchered partial translation:

Could expanding RISC-V be the cure to China's shortage of ICs?

In April of 2020, RISC-V Foundation's CEO sent an email alerting the Foundation's members. The email stated that "We have now established the RISC-V International Association in Switzerland".

The RISC-V foundation, which has been established now for five years, was originally founded in America. On account of worries about being influenced by political factors, it has moved to Switzerland, a country well known for its consistent neutrality and practice of supporting open source.

[...]

Over the last three to four years, in China's technology circles, more and more people have been discussing the adoption of RISC-V. This follows political factors having an increased influence on the science & technology industry, as well as ARM being purchased by NVIDIA. China's technology companies are ever more worried that the x86 and ARM architectures, in the hands of American organizations, may be cut off from access in the future.

In contrast, the open source RISC-V does not have similar concerns. "Regarding domestic Chinese companies, the relocation of the RISC-V Foundation's official headquarters to Switzerland is a very favourable development." said CEO Xu Tao of Saifang Technologies, a domestic Chinese manufacturer of RISC-V processors. "Most significantly, this means that the adoption of the open source RISC-V instruction set, open source software, and public standards can be utilized without any fear of unforeseen complications. It's my belief that this is an opportunity to establish the independence of IP for Chinese processors."


The EU seems to see RISC-V as a solution to its domestic manufacturing, supply chain and security issues. Many German defense companies invested in the ecosystem (e.g. Hensoldt sponsored the recently frontpaged RISC-V port and verification of the seL4 kernel). I guess Switzerland is an attractive location for that too.


I imagine some companies that make SoCs are worried about the nvidia acquisition too and would want an alternative


the m1 mini I'm using is quite nice. not perfect, but more than adequate for personal computing and media creation


What does this step really mean? Is HaikuOS usable as a daily desktop environment? Can you use all/some/? Linux applications on it?

I'm really excited about using an ARM based laptop at some point but not sure if the Linux stack will work on it. Is this the same problem lots of Mac software wouldn't run on the M1 at first?


HaikuOS, and the BeOS it is essentially a continuation of, isn't linux-compatible and doesn't have that as a goal. It has its own kernel and OS stack.

It's not really any more usable as a daily driver on RISC-V than it was on other architectures like x86, but RISC-V support is a milestone because it's likely the applications where a very lightweight OS like HaikuOS would be most useful are things that fit more into the "embedded" bucket than the "desktop computer" bucket - like others in the comments have mentioned, think media kiosks, display systems, hardware appliances, etc. For some perspective, BeOS was purchased by Palm (the handheld computer company) before that whole corner of the industry started dissolving in the smartphone era, so that's the kind of device it was being used on commercially.

(Also, RISC-V isn't ARM - your mainstream ARM laptop is probably going to happen especially now that Apple has shown they can sell well, but RISC-V is still ramping up towards that point of popularity.)


Love this and love the idea of having a RISC-V workstation as my daily driver. Any suggestions on where to buy one?


The best performance this year will be SiFive's "HiFive Unmatched" with four cores more or less equivalent to an ARM A55. Raw speed should be a little quicker than a Raspberry Pi 3, but the user experience will be better because it has an M.2 slot for a modern SSD, a PCIe slot that Radeon graphics cards work in (they demonstrate it with an RX 580), and 16 GB RAM. Pricey at $665 for the Mini-ITX motherboard with CPU and RAM, but it will give the best user experience. Production version is due to ship at the end of this month.

Next is the BeagleV "StarLight". It uses the same SiFive cpu cores but in an SoC from a different company, and with a built in PowerVR GPU. It runs off SD card, but once booted can use USB3 disks. At $119 for 4 GB or $149 for 8 GB it's not a lot worse of a computer than the Unmatched. Due to ship September, I have a beta developer board now.

Allwinner have their D1 chip in mass production right now. It has a single core running at 1.0 GHz with a draft version 0.7.1 128-bit RISC-V Vector unit, which often improves speed 2-3x. Sipeed and Pine64 have promised Linux boards using the chip in the next couple of months starting at $10 or $12. Probably with only 256 MB or 512 MB RAM at that price. That's very competitive against the Raspberry Pi Zero. I've had access to an Allwinner evaluation board for a couple of weeks and it's very, very nice if they can get boards out at those prices.

All the above are 64 bit and run Fedora, Debian, Ubuntu (I recommend Ubuntu as they have 21.04 images for RISC-V) and others coming.

If you want a full featured desktop experience with web browser loading heavy sites, watching YouTube etc then you'll want the Unmatched, but the BeagleV will be close. Note: such a web browser isn't available yet, but once developers have these boards themselves as daily drivers it will happen.

While the Unmatched / BeagleV CPU is similar to a Pi 3, with more RAM and better peripherals (especially in the Unmatched) I expect the overall experience to be like a Core 2 Duo from the mid 2000s, maybe better for many things with a decent video card. Think: original MacBook Air.


Thanks so much, such a helpful reply.



Wow, getting Haiku on a board like that with proper "tight" hardware driver support would be so cool! Makes me think of the original BeBox and how much I wanted one. It, too, was dual-core! :)



SiFive are already selling some boards: https://www.sifive.com/boards


An announcement of Allwinner D1-based SBCs is expected at cheap pricing ($12 has been named before) within weeks, if not days. I'd go for that.


Boards at that price level will have to wait until after the component shortage, but I believe it may be possible to order a higher spec'd D1 board for under $100 within the next few days and receive it possibly before the end of the month.


Probably wait for chinese risc-v workstations/servers to start showing up and get someone to buy it locally for you?


It doesn't look like there is an obvious download link (or I'm too dumb to locate it). ~~but you can grab an image from here: https://download.haiku-os.org/nightly-images/riscv64/~~ Please refer to waddlesplash's reply.

I guess you can try it on Qemu. Might test it later. UTM offers "Risc-V Spike board" among others.
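If you do want to try it under plain QEMU, an invocation along these lines should be a reasonable starting point. The image filename is a placeholder, and the flags are stock options for QEMU's generic riscv64 `virt` machine; this sketch just prints the command so you can inspect it before running:

```shell
IMG=haiku-riscv64.image   # placeholder: use whichever test image you grabbed
CMD="qemu-system-riscv64 -M virt -m 2G -smp 2 \
  -drive file=$IMG,format=raw,if=virtio \
  -device virtio-gpu-device -serial stdio"
echo "$CMD"               # inspect first; run it with: eval "$CMD"
```

`-serial stdio` is worth keeping even if you get a black screen, since any early boot output will land on your terminal.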


The screenshots in the linked thread come from an out-of-tree development branch; the "official" nightly builds do not contain any of this work yet.

There is a download link in the thread for one of the test builds. It probably will not work on anything besides a VM, though.


That's right; using the nightly build I had a black screen with every machine combination I've tried on Qemu.

Does Haiku feature a serial console?


From the screenshot: "Processor: 0 MHz"


Yeah, TinyEMU doesn't have a way of reporting clock speeds back to the guest as they're probably not meaningful anyway.


Holy crap. I want to buy a risc-v computer just to play with this! Do they have a raspberry-pi type device somewhere? (Yes, I'm too lazy to Google for it :)


See the answers to the same question elsewhere, but essentially, 2021 looks to be the year where Linux-capable RISC-V finally becomes widely (and affordably) available. BeagleV is the one I'm waiting for but there are plenty more: https://liliputing.com/?s=RISC-V


Wish it would run on the BeagleV.



