Hacker News
Norris Numbers – Walls you hit in program size (2014) (teamten.com)
189 points by dhotson on Feb 19, 2018 | 81 comments



If he had that insight in 2011 then he was pretty late to the party. By then the 1K to 2K line limit for 'beginner programmers' was very well established.

To go beyond that you need a few tricks of the trade: mostly, structured programming and avoiding global scope will get you to 10K and up.

After 50K or so you will need even more tricks: version control becomes a must, naming conventions matter, and you're probably well into modularization territory. Some actual high-level documentation would also not be a bad idea.

By the time you hit hundreds of thousands of lines you are probably looking at a lifetime's worth of achievement by a single programmer, or more likely you are looking at teamwork. And so you'll need yet more infrastructure around your project to keep it afloat.

You can see quite a bit of this at work when you browse through open source repositories on GitHub: those are roughly the points at which plenty of projects end up being abandoned, as often through lack of insight into how to organize a larger project as through the demotivation that comes from lack of adoption.


I think you need only one trick to develop projects of any size: find big chunks with narrow stable interfaces and split them off as libraries.

Unfortunately many developers prefer to use a slightly different trick: find similarities between different chunks and extract them as common interfaces. It makes you feel smart, and sometimes even works well, but usually it leads to framework disease. Library style is simpler and scales better than framework style.
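For what it's worth, here's a minimal sketch of the difference in Python (names invented purely for illustration): the library hides a chunk behind one narrow function the caller invokes, while the framework extracts a common interface that every new case has to be squeezed into.

    from abc import ABC, abstractmethod

    # Library style: a big chunk split off behind a narrow, stable interface.
    # The application calls the library; the library knows nothing about callers.
    def render_invoice(invoice: dict) -> str:
        """Turn an invoice record into a printable text block."""
        lines = [f"{name}: {price:.2f}" for name, price in invoice["items"].items()]
        return "\n".join(lines + [f"TOTAL: {invoice['total']:.2f}"])

    # Framework style: a common interface extracted from similar-looking chunks.
    # The framework calls you, and every new case has to fit the abstraction.
    class DocumentRenderer(ABC):
        @abstractmethod
        def render(self, record: dict) -> str: ...

    class InvoiceRenderer(DocumentRenderer):
        def render(self, record: dict) -> str:
            return render_invoice(record)

    print(render_invoice({"items": {"widget": 9.99}, "total": 9.99}))

The narrow function can keep its signature stable while its internals churn; the shared abstraction has to change (or sprout flags) every time a new case doesn't quite fit, which is where the framework disease sets in.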


In my younger years I wouldn't have believed this, but after having shepherded a project from 0 to 500K lines over 15 years, it's amazing how accurate you are :)


Same here. I learned on the go how to grow a project from 10k lines to 1 million lines, and the insights of how to scale largely had to do with team dev practices and modularization. Eventually I learned to decompose back into multiple independent code bases developed by separate teams communicating through documentation and bug trackers and now I would avoid growing a codebase up to 1 million lines.


Agree, but it also depends on the programming language.

I do not see how 1K lines of C and 1K lines of Python are the same.


As I understand it the evidence is, surprisingly, just the opposite: defect rate per line of code is the same, independent of which language you use.


I don't think that's particularly surprising, and in fact that's the other half of the point: a 2K 'beginner program' in Haskell or Lisp could do all the work of a 50K C or Java program, but because it's so much smaller, you have/need far less support structure and, yes, far fewer total defects.


My intuition is that code with 2kloc "worth" of language constructs would have the same defect rate across languages (and you can get a lot more done in n constructs in some languages than others), but that code using shorter identifiers or symbolic operators should have the same defect rate per "construct" i.e. a higher defect rate per line. But AIUI the evidence doesn't bear this out, and the per-line defect rate in APL or Perl is the same as in other languages.


As programmers we like our lines to have a given level of cognitive load, depending on personal taste, so regardless of the language you’ll end up with on average the same number of language constructs per line. In high level languages those constructs do more than in lower level languages, but since they are already debugged the defect rate per construct is not higher, ergo the defect rate per line is also stable across languages.


> regardless of the language you’ll end up with on average the same number of language constructs per line

I'm saying this isn't true. APL is the extreme example, but languages vary quite a lot in how many constructs you can and do cram onto a single line.


That's true, and for even denser languages the limits might be lower, but it's 'order of magnitude' correct in all cases.


... And then there's embedded programming, which flips this whole thing on its head, and a programmer's skill seems to be measured in how much smaller (more efficient) you can make your code while also keeping and improving clarity, security, maintainability, etc.


Good embedded programmers are good programmers, period. They have to be. Today, we're seeing that even embedded programmers are getting lazy, because 32-bit ARM core parts and plentiful RAM allow wasteful and error-prone approaches to programming.


Well, "good embedded programmers" are a subset of "good programmers" so yes. The point I'm making is that as embedded programmers get more experienced, we are able to do more with less. And so the hallmark of a good embedded programmer is not only writing less code that is more efficient than sub-good embedded programmers, but then doing so without sacrificing any of the code clarity and maintainability and secure patterns that all good programmers strive for regardless of what kind of programmer they are. It's the same concept the article talks about, just flipped on its head because of the environment.

Also incidentally, in my experience, good embedded programmers are less prone to "get lazy" on devices with sprawling amounts of resources than less experienced embedded programmers. Hungry dogs run faster, yes, but for example just because we've been given a generous 32KB of program space doesn't mean we're going to use it all up in a fit of laziness or start cramming in the features.


That is exactly why I got into embedded programming, and then back out again: it seemed like one of the few areas of software development which still offered a real challenge, but it quickly became obvious that Moore's Law was catching up, and that I had arrived near the end of the party.


That's one reason why I love embedded stuff. I see all phone software as embedded stuff by the way, but the lessons do not seem to apply at all!


Phone software as in, like, smartphone software? The phone in my pocket is literally more powerful than the PS4 I bought last year. 8GB of RAM, 128GB of flash storage, 8-core 2.45GHz 64-bit ARM processor.

Meanwhile, a couple of weeks ago, I had to add a new feature to firmware that was already using just under 900 of the total available 1024 bytes of program space.


Agreed. Assuming you're talking about phone apps (as opposed to baseband or sensor control, etc.), software for phones has much more in common with desktop software than it does with the embedded world; you just have to be a little bit more conscious about your power and data usage.

In my embedded work I pushed a few years back to upgrade the microcontrollers we used from 2kB to 16kB RAM and 16kB to 128kB flash devices (at a grand total extra cost of $2) so I could stop worrying about how we would fit all the new features in.

My personal line for "embedded" is at the point where you take responsibility for the OS functionality too. IMHO that's currently somewhere around Raspberry Pi etc. devices, which on one level have computing power in line with the smartphone world, but on the other you're still mucking around with the OS yourself. I know it's pure snobbery on my behalf, but I still cringe to think about Node.js running on them... (even though I'm doing it myself for hobbyist-level stuff).


Ah! You know, that makes me feel a little better, knowing I'm not the only one sort of balking at the idea of Node.js running on those little dev boards... Even though I've also done it in the past haha! Very funny. :)


It's probably how the assembly guys felt back in the 90s/early 2000s when we started doing more C on the 8-bit micros... except way, way worse.


Yes, and you need to charge it daily because it uses as much power as it does.


Before smartphones, we didn't use to spend hours a day with our phones on. They're not big phones, they're mini computers.


Yeah, historically I have tended to agree with the parent's posts on HN, but here I couldn't disagree more. Our phones are definitely closer to small workstations than embedded devices. Not only because of the available resources, but because of things like application portability. I can take an apk for my phone and run it on every other Android device on the market running a compatible API. The fact that it runs Android in the first place makes it too general purpose to be an embedded device. Now, if the parent had said our old-style brick phones were embedded devices, I would agree with him there. But not smartphones.


This can be true for a beginner, for a developer who never hit that wall before or for a developer without enough talent to learn how to do it right after all. But once you start using the right methodologies, making things highly modular and as much as possible independent from each other, you can go quite a distance before new "walls" arise.

Sometimes I take more time to think of a proper and scalable name for variables and methods than it actually takes to write the implementation. While refactoring I might do another round of thinking about naming and make sure there are no, or an absolute minimum of, possible side effects for the implementation, etc.

I've been coding for almost 30 years now. I'm still learning and still make my mistakes of course, but most of the time they occur because I rush for some reason. Writing good code takes time, especially to rethink what you're doing, refactoring, making the right adjustments so it complements the codebase. Before I complete the beta release of a codebase I've made thousands and thousands of decisions, where only 1 wrong decision can cause a terrible amount of trouble later on.

For me it's a creative process. It goes in waves. I cannot always be a top performer; I've accepted that. When I recognise I was in a low during some implementation I might do a total rewrite of it or apply some serious refactoring (and force myself to take the time for that).

I still experience coding to be much harder to get right than I ever expected it to be. For me a codebase is a highly complex system of maybe hundreds of files with APIs working together, not just a bunch of algorithms.


I feel that there is some naiveté in this perspective, although the OP does touch on it somewhat. A novice most likely would write their code in a very monolithic fashion. That same approach fails significantly with larger code bases.

As a seasoned developer I have come to realize that one of the most important things to be a good developer is organizational skills. Unfortunately it seems that ways to organize code bases, including things like naming, mutable state, modularization, cohesion/coupling, etc., are not as well developed or understood in general in our industry as they should be.

While understanding and knowing the right algorithms is important, I sometimes wonder if our emphasis on knowing algorithms off the top of your head in the interviewing process contributes to putting the emphasis on the wrong things in software development.


>Unfortunately it seems that ways to organize code bases, including things like naming, mutable state, modularization, cohesion/coupling, etc., are not as well developed or understood in general in our industry as they should be.

True, but I think any attempt to standardize or to make it another engineering discipline would mean the end of high programmer salaries. Just follow guidelines and standard procedure in a book and you'll end up with a reasonable solution that is reasonably good and reasonably reliable at a reasonable cost. Most businesses would jump at that...

And I'd argue that would be a good thing for all the sectors where programming is important, but no good programmer wants to actually join because it's not sexy (coal mining, medical equipment, etc.).

> I sometimes wonder if our emphasis on knowing algorithms off the top of your head in the interviewing process contributes to putting the emphasis on the wrong things in software development.

Yes but no business actually cares about creative solutions, unless algorithms are core to their business (and even then, other human factors outweigh finding the optimal, bestest, fastest solution). They simply want to use computers to solve a business problem. They want a runner to run from A to B, not an Olympic sprinter who is going to break a world record. Do you know a reasonably decent sort algorithm? Good, just use it. Profiling? Optimizing? That's for Olympic sprinters. Design patterns? Blindly apply a GoF design pattern that approximates your problem, etc etc.


To me, your line of thinking seems to suggest that there is some end-game where a canonical program could be written and all programs would converge towards this with standardization. But that won't happen just by standardizing structure. The structure still holds content which has to adapt to those business goals etc.

I think it is more helpful to think of programming like other writing tasks. The difference in size and structure of programs is more like the difference between tweets, blog posts, shopping lists, poems, political essays, short stories, daily news articles, novels, anthologies, text books, academic articles, encyclopedias, court records, state constitutions, legislation, car repair manuals, and tomes by Russian authors.

Having standards and idioms for structure in writing doesn't remove the need for writing, creative input, nor even for creative exceptions to those standards and idioms.


But I don't agree that programming SHOULD be a creative task. The primary reason being that a creative task's output depends quite a bit on the person performing the task, whereas for an engineering task, less so - as long as the engineer is trained, knowledgeable, etc., etc. (usual caveats for pedants...).

>But that won't happen just by standardizing structure. The structure still holds content which has to adapt to those business goals etc.

I agree, but I'm thinking along the lines of: more LEGO, less literature. I.e. a move towards a more standardized problem specification process that itself becomes the solution, or at the very least yields one with minimal effort.


> I agree, but I'm thinking along the lines of: more LEGO, less literature.

That's what I see happening. There's already a huge reliance on just connecting together libraries within the confines of a framework for lots of problems.


> no good programmer wants to actually join because it's not sexy (coal mining, medical equipment, etc.)

Good programmers don't want to join these kinds of jobs because most normal businesses don't understand software engineering or the market and treat it like a cost center.

I started out my software career by writing VB macros in Excel and saved my "boring business" company hundreds of thousands as a side effort to my normal job. Yet I couldn't get any traction to do it full time, or to have autonomy, or to get a meaningful (30-50%) pay increase to match the dev market salary.


I think our industry can come to _understand_ and _articulate_ the skill set without having to standardize/certify it.

Civil architects can be wildly creative and artful in their industry, which understands its own domain deeply.


I would agree that there is an artfulness, for lack of a better term, in engineering. However, there is a lot of science and math i.e. structural engineering that underlies building a structure that is safe and won’t fall down. Admittedly it still does happen from time to time due to mistakes or things that might have been misunderstood, e.g. the Tacoma Narrows Bridge.

The field of software engineering is very nascent and is still in need of the underlying science and math foundations like you find in other engineering disciplines. Unfortunately methodologies like Agile and Waterfall seem to have become synonymous with software engineering.


Right, but when you hire an architect to design a random office building, you're not expecting a piece of art. My point is the vast majority of programming projects are random office building # 23.


Then we’re in the realm of automated or at least codified/systematic process, or we should be, no?


Yes, and we should be encouraging more of that. But as programmers (well, to my mind anyway) it nags us when we look at inefficient solutions, even those which are reasonably robust/adequate. We tend to scoff at people using Visual Basic to solve a business problem, when it's probably the best language for a large chunk of the problem space.


> I think any attempt to standardize or to make it another engineering discipline would mean the end of high programmer salaries.

I think you might live in a tech city bubble.

Outside of a few tech cities developer salaries are well below engineering salaries.

If development was treated more like engineering then developer salaries would go up. A lot.


As a thought experiment, imagine an org that hired for organizational/abstraction skills and landed a single worker good at algorithm design. (This is a distilled/simplistic example, but bear with me to the point.)

Only 10-20% of the time in my career building business/productivity application systems have I needed to design & implement a complex algo.

Okay so my fictional team abstracts away that need and my algo guy colors in the deets. (In most cases this is possible; in rarer cases the performance of the algo needs to cross-cut.)

But the point is what you’re saying—one skill is pluggable, the other is not.

Kind of making this less interesting (or more?) is that learning algorithm dev tools/skills is far, far easier than learning the architectural/organizational skills. So no wonder the current prevailing hiring emphasis.


"Absolutely refuse to add any feature or line of code unless you need it right now, and need it badly."

As I've gotten older I've seen the value of this rule. Though it sounds merely negative, it can produce effects that seem little short of genius.


People often try to implement things with a hypothetical future use in mind. It may even be a realistic future, but it is nevertheless a future that does not need to be dealt with right now. Adding code for things that aren't actually in play means adding something you can't actually test. It means people may abuse what you set up later (and introduce bugs) if the actual future turns out different than you planned. Taken to the extreme, this planning for the future turns one into an architecture astronaut.

https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


Excepting those features which allow you to cull large chunks of code. Those are my favorite features, rare as they are.


In my experience this is often possible by removing duplicated functionality into a shared implementation. That is, by exchanging multiple features of linear complexity for a single implementation. Due to being shared, that implementation will tend to have geometric complexity.

That means the duplication has to be highly significant and variations need to be few for it to pay off. ... And you'll better hope you have a good set of tests to verify everything is still working.

In most cases it's still a good idea, but it's still a case where you need to argue that you really need it, so it's no exception at all.
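A toy illustration of that trade-off, in Python (made-up names): two near-duplicate features get folded into one shared implementation, and every option added to the shared code has to be weighed against every other option, which is where the roughly geometric growth comes from.

    # Before: two near-duplicate features, each simple (linear) on its own.
    def export_csv(rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

    def export_tsv(rows):
        return "\n".join("\t".join(str(v) for v in row) for row in rows)

    # After: one shared implementation. Each new variation (delimiter, quoting,
    # header row, ...) interacts with the others, so complexity compounds.
    def export_delimited(rows, delimiter=",", header=None, quote=False):
        def fmt(value):
            text = str(value)
            return f'"{text}"' if quote else text
        lines = [delimiter.join(fmt(v) for v in row) for row in rows]
        if header is not None:
            lines.insert(0, delimiter.join(fmt(h) for h in header))
        return "\n".join(lines)

    print(export_delimited([[1, "a"], [2, "b"]], delimiter="\t", header=["id", "name"]))

Here the duplication is significant and the variations are few, so the merge pays off; add a handful more options and the shared version becomes exactly the kind of thing you have to argue you really need.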


Sounds like the perennial redundancy vs dependency struggle [0]. I do like the explanation in terms of linear vs geometric complexity though.

[0] https://yosefk.com/blog/redundancy-vs-dependencies-which-is-...


That's one way to do it. Another way is when the requirements change and you no longer need most of a feature.


Those are not exceptions. We all need those now and need them badly ;-)


It's a more fleshed out version of YAGNI: https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it; that page has a number of other similar acronyms and adages: if it ain't broke, don't fix it; KISS; feature creep; all names for the same problem.

Mind you, software developers aren't the only ones that cause this; a product owner / lead / whoever calls the shots is also very likely to keep bolting features onto a product. Either they have to keep that impulse down, or they too need a system to manage it. Also they shouldn't be afraid of culling existing features; what usually seems to happen is that more and more stuff is bolted on, until it's too much and they just rewrite the whole thing from scratch, leading to a clean, lean new application - often missing some features that older users like. Either they do it, or a competitor does.


My experience is that it is very hard to cull features, even with a sympathetic product manager. Sales wants to have as many boxes ticked as possible, so they fight tooth and nail against feature removal, all the while forgetting that simplicity is a feature too.


I'm a novice just building my first 2k+ program, and the thing that seems crazy to me is that, based on what you learn in intro courses, things like namespaces and organization and modularization not only aren't emphasized, but seem like unimportant distractions from the core task of programming.

Based on all the blogs, HN comments, and the comments here, that couldn't be further from the truth. Are these just things you learn from experience / mentors?


> things like namespaces and organization and modularization not only aren't emphasized, but seem like unimportant distractions from the core task of programming

There's an analogy I've heard, and I can't remember where. It might be a Feynman quote I'm recalling...

It'd be crazy to start design work on an airplane, if you're hungry and want to go to the corner store a block away for chips. It'd also be crazy to try to walk to China, since it's a long way, and also you might drown. The approach you take to solving the problem of getting from here to there depends completely on where here and there are, and what else you want to carry with you.

I don't use namespaces and build modules for an Arduino project, or a simple command line tool. But for large projects, I literally can't live without namespacing and modularization.

The core task of programming constitutes very different activities at different scales, as different as comparing walking to building jet engines.
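To make that concrete, here's roughly what the shift looks like in Python (a hypothetical layout, invented for illustration): the small tool stays in one file, while the larger project splits into packages whose __init__.py exposes a narrow public surface, so the rest of the code depends on the namespace rather than on package internals.

    # Hypothetical project layout:
    #
    #   myapp/
    #       billing/
    #           __init__.py    <- the file shown below
    #           invoices.py
    #           taxes.py
    #       reporting/
    #           __init__.py
    #           monthly.py
    #
    # billing/__init__.py keeps the package's public surface narrow:
    from .invoices import render_invoice
    from .taxes import tax_for_country

    __all__ = ["render_invoice", "tax_for_country"]

    # Elsewhere, callers import through the namespace, not the internals:
    #   from myapp.billing import render_invoice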


You learn them when you need them. They're just extra overhead on smaller projects so they're ignored... until the project grows and they can't be ignored any longer.


Learn to crawl before you walk. Learn to walk before you run.

As far as where to learn these things: there are some books that talk about more broad topics than just how to use a particular tool/language. Those are more supplemental to experience/mentors, though.


I suppose the courses were encouraging to start with something, anything, instead of trying to get everything perfect on first try. And then perhaps the "and then learn these next" part gets lost...


Depends what language you work in. Java for instance is very good at this and everything is always organized, whereas C has no notion of namespace or package.


Does this depend on language? 20k lines of Perl is a different beast from 20k lines of Python, or 20k lines of verbose Java, I would expect. If not, does it suggest we should be aiming to use more expressive languages, that can get more done in less lines?


Just from my own experience, I’d say yes, with qualification—rather like age is only an approximate indicator of maturity, line count isn’t a meaningful metric per se, but a proxy for more important, less tangible things like complexity, navigability, fitting the program in mind, and so on.

In Haskell, for example, I can write software that does more things, using less code, with higher confidence that it’s correct, and much higher ability to change it, than I ever could in C++, my previous top language.

That’s partly the language, partly the fact that it matches my style of thinking, and partly the culture around it—writing reusable code, heavily encapsulating state, and making as much of your program as possible “correct by construction” so that invalid states and state transitions are inexpressible—or at least harder to express, such that the easy path is also the right one. A 500-line Haskell module tends to feel very large to me, on par with 1,000–2,000 lines or so of C++.

I don’t necessarily mean to just evangelise Haskell or functional programming here. Any language & ecosystem with an expressive orthogonal core, which matches your problem space and not only matches but improves your style of thinking, can give you similar benefits if you’re deliberate about taking advantage of it.
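To sketch what I mean by “correct by construction” outside Haskell, here's a rough Python approximation (hypothetical names, just an illustration): give each state its own type instead of one record full of optional fields guarded by a status flag.

    from dataclasses import dataclass
    from typing import Union

    # One type per state: a Connected value cannot exist without a session id,
    # and a Disconnected one cannot carry a stale session around.
    @dataclass(frozen=True)
    class Disconnected:
        last_error: str = ""

    @dataclass(frozen=True)
    class Connecting:
        attempt: int

    @dataclass(frozen=True)
    class Connected:
        session_id: str

    ConnectionState = Union[Disconnected, Connecting, Connected]

    def describe(state: ConnectionState) -> str:
        # Each branch only sees the fields that are valid in that state.
        if isinstance(state, Connected):
            return f"connected (session {state.session_id})"
        if isinstance(state, Connecting):
            return f"connecting, attempt {state.attempt}"
        return f"disconnected: {state.last_error or 'no error yet'}"

    print(describe(Connecting(attempt=2)))

It's a much weaker guarantee than the Haskell version, since nothing stops you at compile time, but a checker like mypy will at least flag code that reads session_id off a value it can't prove is Connected.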


I suspect there might be a small difference, like 2-3x, but not a big one like 10x or more. Every large project I've ever seen in any language adopts a very different style than small projects. Large projects automatically start modularizing and using namespaces of some flavor, and the code typically becomes more explicit and more verbose, it becomes sparser compared to small programs.

I think we should be aiming for more expressive languages that can get more done in fewer lines, but I don't think that will have any effect on this phenomenon of hitting a wall once the complexity is high enough. The main issue is less about how expressive the language is, and more about whether you can read a section of code and easily understand its limits and constraints and side effects. In non-functional C, it can be really hard to spot pointer bugs, where a more functional Python program might be easier to debug and maintain.


20k lines of Perl is a different beast from 20k lines of Python, or 20k lines of verbose Java

Very good point. We’re all familiar with the “I can do ${lambda} in ${n} lines with ${lang} language”.

It seems to take me 10x more lines to do something in Java vs. Python, and 10x more lines to do something in Python vs. Perl. Except that the Java is laborious to read, the Python is nicely readable, and the Perl will never be understood by a programmer.

I had great fun writing Perl many years ago but I used to joke that Perl was a “write only” language.


I had lunch with Eric Raymond here in Austin some 15-20 years ago, and he was lamenting the fact that he couldn't read his own Perl programs less than a year after writing them, so he was rewriting them all in Python. I never get religious about languages or platforms, but Python, which has been described as "executable pseudocode", does a nice job of balancing power and readability. Idiomatic Perl is powerful, but opaque enough to be effectively unmaintainable. Since errors are pretty much proportional to LOC, it is best to use languages that offer "high leverage": Python, Tcl, and Lua come to mind. Note that C is still almost the only choice for real embedded work - it's unsurpassed at register bit-banging. (Well, except for assembly, but that requires skills that are no longer taught - interestingly, the same ones that make good embedded C programmers...)


he couldn't read his own Perl programs less than a year after writing them

A year? I often lost a half day trying to understand Perl I’d written less than a week before. I’m not exaggerating.

I’m not sure what that says about me as a programmer but I’m terrified to find out.


> Does this depend on language? 20k lines of Perl is a different beast from 20k lines of Python, or 20k lines of verbose Java, I would expect.

Evidence is rare in our profession, but IIRC what little we have suggests no.

> If not, does it suggest we should be aiming to use more expressive languages, that can get more done in less lines?

Yes, it absolutely does. I recommend pursuing that approach.


...and 20k lines of APL is... probably not something any APL programmer has ever reached for any single application.


I hadn't come across this before, and sure enough, looking at the current personal project I'm working on, it's composed of 3 main C++ files - a Windows program, a parser [1] and a custom OpenGL control [2] - each 500-600 lines.

For me, I use Stepwise Refinement [3] to get to this point, but to get further, I have to start breaking out a more abstract approach. I found this definition of Structured Programming that puts it very nicely [4].

There's also a perhaps self-imposed wall, where you might trial a solution and implement a prototype, then use that to realize that there is no point in pressing on without some serious redesign of a key component. For my example above, the rendering part is old-style OpenGL, which works pretty nicely at 60 fps, but I'm holding off doing more until I can slot in and benchmark a better approach using vertex buffers and shaders, with the goals of enabling me to shape and scale the renderings in an abstracted coordinate system, and scale to rendering hundreds of files instead of just one.

[1] https://twitter.com/watmough/status/962470455037841409

[2] https://twitter.com/watmough/status/965007110391128064

[3] http://www.informatik.uni-bremen.de/gdpa/def/def_s/STEPWISE_...

[4] https://www.encyclopedia.com/computing/dictionaries-thesauru...



I avoid these walls entirely by never using newlines in my code.


That's webscale af


Interesting. My main project is getting near the 20k range. I haven't hit a wall, but it is getting noticeably harder to stay fluent in every part of the program. When I switch from one major system to another, there is a few days ramp-up. Still happy with readability and complexity.


Same. There are some parts of the code that I haven't touched in a few years, and when I do there's a significant effort to refamiliarise myself... and a whole lot of "what stupid idiot wrote this", oh yeah, that was me three years ago. :)

I just finished the Python 2 -> 3 upgrade on it, that was fun.


Are there projects we can undertake to intentionally hit those walls and measure ourselves?


My experience is that exercises don't capture the skills you need - the hard part is understanding the domain and descoping the problem as much as possible, if you start with a well-defined exercise you've already taken out the essence of the problem. The only way to hit the walls is to try to solve a genuine problem - and even then, if you don't hit the wall there's no way to know whether the problem was too easy or you were too good.

Maybe none of the problems we currently solve with computers actually demands beyond-the-wall programming, if we just used tools that were good enough. Certainly most of the time when automating business processes I would find it very hard to believe there was 20k lines' worth of essential, irreducible complexity in the task. It's just that with today's tools most of the lines we write are ceremonial, with relatively little describing the essence of the problem.


Yes, but you'll need to have users who aren't yourself, and after the 20k wall, probably a manager and other coders on your team and a year or two horizon. At this scale, the easiest way to practice this is to do it as part of your work, get involved in larger projects, not attempt it yourself or as a side project.


Like the equivalent of 'memory-hard' problems - 'lines-of-code' hard problems.

I suppose that's close to what https://en.wikipedia.org/wiki/Kolmogorov_complexity is.


Ah no. I meant what task can we undertake to see this in action? For example, writing your own compiler is bound to make you hit the 1k wall. If you're able to write it chances are you have passed the 1k wall some time in your past. That kind of measurement.


Part of the problem is that people who are good at dealing with large codebases tend to write fewer lines of code if they can at all get away with it.


I think it's only something you really encounter in either your day-to-day work or some OS projects, but those have, I think, a very different day-to-day structure.


A bit overdramatic. I remember Unreal 3 was close to 2 million lines, and that's the normal region for games (and has been for a decade), and trust me, gamedev studios are not staffed by seasoned programmers.


Is that just scripting, or does that include the engine?

If it's just game scripting, then I could imagine it's possible to squeeze 2 mil lines out of junior devs. I assume there are very few things that can go drastically "wrong".


"Scripting" vs coding is not a clear cut case also a lot of studios (back in the day that is, since now the unreal model is different and everyone has access to the source) had source access so games would usually make changes to the engine. 2 mil is just the base engine. There's of course loads of middleware and plugins and integration for it all, probably doubling the size and of course the "scripting" which adds complexity like anything else and it relies on support for things in the main codebase. Customising the engine is not a matter of tweaking variables but extending the base classes. And for game logic you might be writing completely separate systems with their own architecture.

Repos were big. Really big. Especially since we're talking about people who will check in FMVs into source control.

Ah... what a horrible world.


> cannot debug or modify it without herculean effort.

This could explain release dates getting pushed back :)


What about Paul Graham's On Lisp and leveraging nested macros?

What about Alan Kay's VPRI efforts to reduce a full system to 100K lines (OMeta)?


shard your problem space.


Indeed, microservices can be a pain in the butt to debug, due to communication, but they can help you to view the world as a collection of smaller, easily understood programs, rather than one huge monolith.




