Bjarne Stroustrup was our special guest.
Video is here.
[00:00:00] Yegor: Hi everybody, this is the Shift-M podcast, and we have a very special guest today who doesn't need an introduction. Bjarne Stroustrup is the man who created and developed probably the best and most popular programming language in the world: C++. Today we'll have questions about programming and programmers, but not so much technical questions, more about the social aspects of programming languages and so on. I prepared some questions of my own, and I also asked our followers on Twitter for their questions, so there are going to be two parts, and I hope we'll have time to answer all of them. Bjarne, hello.
[00:00:41] Bjarne: Hello.
[00:00:00] Yegor: So my first question: you're the developer of a very popular programming language. I have also made some programming languages, and many people listening to us right now have developed languages too, but they are way less popular than C++. So our question is: how do you make a language popular? What does it take for the developer of a language not only to make it, but also to make it popular?
[00:01:12] Bjarne: You need a lot of time, a lot of persistence and a lot of luck, and then, when you start out, you actually have to try to solve a problem. Most programming languages fail, I think, because people are not trying to solve a problem, they are trying to make a point. That was a statement from Dennis Ritchie: some languages are designed to solve a problem and some languages are designed to prove a point. Very often people just think they can make a nicer language in some vague way, or prove that a notation is more effective, and by and large that's not what succeeds. If you look at the languages that are really successful over a longish time, or with large numbers of users, hundreds of thousands, millions, they had a problem that needed solving, and they had an articulated idea of how they were going to solve it.
In the case of C++, there wasn't a language that could do really hardcore low-level programming and abstract away from it when you needed to. Complexity was increasing and management of complexity was important, but despite everybody's claims, performance and efficiency also remained important. The way I used to say it was that the only thing that grows faster than computer performance is human expectation. We always want more, and so we need the performance from the machine and we need to handle the complexity; all the languages have different kinds of strategies for that. And then you have to carry on for a long time. A lot of people work for something on the order of five years and then they get what they really wanted, which wasn't a programming language: it was tenure at a university, or a PhD, or some local fame, and then they go and do the next project, something else, and the language dies.
Obviously people don't usually think of it like that, and it's rude to put it that way; they are idealists trying to get their ideas out, but as problems occur and time goes by, their aims and motivations evaporate. My rule of thumb is that about 200 languages are designed every year; that may be off, maybe it's a thousand, and most of them die, so persistence is really important. I get my motivation by talking to users: what are their problems, how do they respond to my suggested solutions? Talking to users and seeing that something good comes out of it is what keeps me going. I've been going, well, four decades; that's a very long time, but the point is that the things that motivated me keep coming back. It actually does something good. It's really nice to see that little Mars helicopter whizzing around up there and to know that there's some C++ in the flight software. Talking to people who build interesting things is what keeps me going, and I think that is what keeps the language successful: it is not dead, it is not boring, it doesn't bore the developers, and new people are attracted because it does something good. That's where you get the long-term development from. The other thing is that people sometimes get a bright idea a couple of years in and make a completely radical change that breaks all the user code; they basically cut their users off at the knees. Not a good idea. The new language can be much better, but stability is a feature. If you have people who trust you, building their careers, their products, their companies on it, you can't just cut them off. Stability is a feature. It's a pain in the neck; having to maintain compatibility over the long term stops you from moving fast, stops you from doing interesting things. On the other hand, you have a responsibility: you got these people to trust you and your language, so you must support them.
[00:06:16] Yegor: And by listening to users, do you mean all users or a selected group of users? Let me rephrase my question. We know that C++ and many other very popular languages have so-called committees that decide what's going to appear next in the language. Very often programmers look at these committees and see them making decisions which are not as solid and strong as they would be if there were just one developer, one author of the language. C++ right now is getting new features every few years, and I believe that if you were the only developer of the language, it would probably look different right now. So how do you feel about this? Is it a good idea in general to have a committee and vote on decisions, instead of having a very small group of decision makers, maybe one or two people, behind the language?
[00:07:12] Bjarne: First of all, I don't think I had a choice. That committee had to be started; the biggest corporations in the computing industry insisted it be started, and they insisted it be ISO. It was IBM, it was Sun, it was HP; they controlled the industry at the time, so I didn't have any choice and I had to live with it. Now, I think the ideal organization for a programming language is a small executive group that makes decisions and a large community that can contribute somehow. You need something in the center that cares for everybody and has deep understanding and broad knowledge. People who come up with proposals usually have a solution to their own personal problem, a local problem, or a problem in their industry; they have a narrower view, and it's easier to solve things for a narrower community. I'm talking here about something like C++, a general-purpose programming language, so you have to look at a larger community.
Getting back to your original question: who do you listen to? The answer is you have to listen to a lot of people and you have to not be in a bubble. You don't just talk to your friends, you don't just talk to people who think the way you do. So for years, basically all the years I worked on C++, I tended to go somewhere, usually give a talk, talk to some users, and very often in places I hadn't been before: car factories, games companies, embedded-systems places, finance. Preaching to the converted is very pleasant and you have a really good time, but you don't learn anything, or you don't learn much, so you have to go and talk to people who are in different industries and have different interests. What do I know about building aeroplanes or tractors? The answer, at the time I thought of it, was not very much, so maybe I should know something about it. You should talk to lots of people. I talk to theoreticians, I talk to engineers, I talk to computer scientists, I talk to developers at all kinds of levels. I also give a course at the university every year to keep my hand in, so that I know what students know and don't know, what they're interested in and what they consider boring. So basically I'm very, very keen on broad knowledge, but you have to temper that with an aim: you can't just try to please everybody equally. The language has to have aims, it has to have an articulated set of rules for what it is, and then you must listen to just about everybody, and then you realize that some you can help and some you can't help.
There are people who claim that everybody on Earth should be able to program. That may be true for some definition of programming; I personally am more interested in the people who program things I depend on, like the brakes of my car. They'd better not be everybody, they'd better be good engineers. Same with planes, same with communication systems, banking systems, programs that calculate where goods go and how. I mean, how do you load containers onto a container ship? That's a huge problem, a seriously difficult problem; you don't put all the heavy ones to the left, or the ship keels over. How do you simulate an engine to minimize pollution? These things are very important, and you don't want everybody to do them. I call the people who can do that kind of thing engineers.
[00:13:04] Yegor: Do you think C++ will remain the main programming language for the next two or three decades, or will it be replaced by something?
[00:13:14] Bjarne: I don't know. Languages by and large don't die. FORTRAN isn't dead, and that was the first language of its kind; COBOL isn't dead, and it was the only competition to that other first language. So in two or three decades C++ will not be dead; there will be millions of programmers. The question is whether it will have stalled, in the sense that new stuff will not be C++ and the interesting challenges that C++ was designed for will be done by other languages. In some sense I should wish for that, because it would prove that something better had come along; on the other hand, I doubt it. C++ is grappling with some really seriously hard problems, and lots of other languages are easier to use and simpler partly because they don't accept those challenges, because they don't have to: you can always call a C++ program to do the job for you and then complain that C++ is too complicated. So I hope that something better comes along; I'm not particularly optimistic about that, and I am reasonably optimistic that C++ will be a vibrant and viable language in a decade or two. I can't see further into the future than that.
[00:14:53] Yegor: And do you believe that these engineers you just mentioned, the people who design software for mission-critical systems, could potentially be replaced, maybe by artificial intelligence or something like that? Now we have the trend called low code or no code; you have probably heard those names. Some people say that computers will be more and more involved in the way we write code. Right now we type a lot of symbols when we write code, for example in C++ you have to type a lot of things, and in the future, they say, there will be blocks which you just wire together and they start working, so programming will be less and less about coding and more and more about wiring pieces together.
[00:15:36] Bjarne: People have been saying this for at least 30 years, maybe 40, I am sure. Actually, I remember somebody coming and explaining to me that this was the way it should be done, in the very early years of C++. Once you understand a domain sufficiently, and once a domain becomes sort of standard, you can do something like that, but who writes the underlying infrastructure, who writes the glue that goes between those bits and pieces? As I said before, there's probably more non-C++ code than C++ in what we are using right now, in this recording, in my earphones, but that's fine. By all means, take the things that you can solve by wiring blocks together; there's actually a fairly nice language from National Instruments for doing that kind of thing in embedded systems, and use them. If the domain is stable enough, do it. But the infrastructure and the glue are going to be a general-purpose programming language of some sort, and it's going to be programmed by programmers. AI and quantum computing are going to have their place, but I actually see them as gadgets you attach to more conventional programming to solve a specific task, like: is the thing moving to the left of my car a bicycle? AI is pretty good at that. Integrating the sensors in a car or a plane is not such a problem, as far as I can see.
[00:17:51] Yegor: And what do you think about the ability to use AI for refactoring code, improving code, fixing bugs? People say that's going to be the future of programming: we will write code the way we write it, and then the AI will step in, look at the millions of lines of code we wrote, and improve it, improve it and improve it.
[00:18:13] Bjarne: You can probably do something like that, but I suspect it's much better at looking for things that depart from routine than at actually creating something new and building the fundamental structure. AI is very good at looking for abnormal behavior, so it can find bugs in that sense, but we still need to build the thing: you still have to design the interfaces, you still have to design the system. So I see it as a localized feature rather than the center. I'm very keen on static analysis, which has slightly similar properties, but there you are looking at whether the structure is sound. AI could be used to figure out whether you departed from the regularity, once you know what the right thing is.
[00:19:28] Yegor: So you don't think that we will ever have computers doing more with our code than we do with it manually?
[00:19:38] Bjarne: I don't know. If we're talking a couple of years, the answer is no; if we're talking about 100 years, I have no clue, and neither has anybody else. So what's the time scale? I think we have a lot of really important problems for which that is not the approach, and they will keep us busy for maybe 15 years. My thesis advisor David Wheeler, who was a seriously smart guy, said that anything that's going to be mainstream in 15 years exists in a lab somewhere today. So I could turn around and ask you: in which lab does what you describe exist today at a reasonable scale? If you can't answer that question, the answer is "Well, we're talking about further in the future than 15 years".
[00:20:44] Yegor: You know, the main concern with this approach, this automated AI-driven refactoring where the computer makes changes to your code, is that programmers will not be willing, or will not be happy, to see their code changed by a robot, so they mean, yeah…
[00:21:06] Bjarne: I wouldn't be too unhappy with that, but I would want the second part, which is to verify that it really was refactoring, that the result is the same, and I would also want some evidence that the performance criteria haven't been damaged in the refactoring. So you could have an AI-ish thing come up with suggestions, but you need a structured, designed, engineered framework to verify that what came out makes sense. The thing that would drive programmers nuts is if the system, the AI, comes up and says "this one is better, use it", and then they have to figure out manually, by themselves, without serious tools, that it really is better. Testing is not enough for that; that's just the start. You need a really strong guarantee that the AI hasn't broken it, because AIs are stupid. I mean, you've seen the examples of a yellow school bus being identified as a pussycat because somebody injected a bug into the pipeline. You really, really want to be sure about what comes out on the other end. If it is code that's going to be deployed in a place where it matters to people's lives or livelihoods, that kind of code you have to seriously test, you have to have verification, and the system developer has to understand what's going on. You can't have something that is not understood.
[00:23:10] Yegor: You know, some people say that if we can formally prove that our changes are still valid, that, like you said, the code still does the same as it was doing before, and we can formally prove it, then maybe programmers will trust it. And based on that I'm going to ask my next question. According to my knowledge, when C++ was initially designed there was no formal definition of the language; it worked as a language, but there was no formal paper defining what the objects are, what the classes are, what inheritance is. Still, the language is super popular, and now we have some languages that are formally specified but are way less popular. So do you think we need this formality in programming languages, or should we just focus on practicality and usability?
[00:24:03] Bjarne: Well, there was something about the previous question I can't get back to; maybe I'll remember it later. The truth is, "I've only proven this code right, I haven't tested it, so don't trust it too much." Formality, as in formal proofs and such, is important, and so is testing. I mean, you could prove the wrong thing; that's very easy. One of the fundamental problems with proofs is that the specification can be wrong, the specification can be vague, the specification can be simplified to fit the proof model, whereas the real world doesn't simplify that easily. If you haven't got the right set of criteria, the formal proof is useless.
So I think verification has its place, testing has its place, just like AI has its place. But some people think that all that matters is the formal proof, or that a complete formal proof is an absolute requirement; I don't believe that. It's a tool. I designed the C++ inheritance system and I didn't have any formal proof. I tried the best I could to be sure I understood it; I have a mathematical training, so it's not totally off the scale, but I certainly wouldn't claim it was a formal proof. Twenty years later a bunch of people into proofs and such set out to prove that C++ was wrong and that actually Java was right. They managed to come up with the first formal proof that the C++ inheritance mechanism was sound, formally proving that I got it right. It took them 20 years. The point is we had 20 years of actually using that stuff and it didn't break, so we got 20 years of use out of it before the theory caught up. Theoreticians like to say it's the other way around, that the proof comes first and then the use. That's not always the case; sometimes you get lucky, and sometimes your thinking and testing are sufficient. So I'll take any tool I can get to make the language better, including theory, definitely including theory, but I will not take it as a requirement.
The languages that have been done with proper proof mechanisms tend to be designed together with the proof system, and the proof system doesn't seem to apply to other languages that are in any way seriously different from the language it was designed for. There have been formal proofs of C++'s resource management using Coq, and of the memory-layout stuff, again using Coq, so it's certainly been done for parts of C++ that are critical. On the other hand, the theory didn't come up with RAII and the scope-based resource management that's key to a lot of what C++ is, but it is sound; somebody proved it.
[00:28:05] Yegor: You know what I've heard about C++ (I was using C++ for probably 10 years) is that it's a good language for good programmers, but for junior programmers it's very easy in C++ to make a mess, because the language provides so many features and is so powerful that it's easy to abuse it and do something that is not supposed to be done there. That was okay probably 30 years ago, when programmers were an elite group of researchers and scientists and, as you said, there was no claim that anybody had to be able to write code. But now the situation is changing: we have probably 20 million programmers in the world, and most of them are far from being researchers or scientists. They're just people doing their job; they train for a few months and then start programming. So don't you think that now it's more important to design languages that are simpler and by design prevent mistakes, instead of giving people a lot of power?
[00:29:12] Bjarne: You're right about the problem, you're right about the complexity of the language, but I don't think the solution is necessarily another language. Languages increase in complexity over time as they broaden their application domains. When Java came, one of the major claims was that you could rip out three quarters of C++ and you didn't need it, and at the time I made predictions about both the size growth and the improvement to come. Complexity is inherent in the problem area, and if you want to stay simple, you mustn't broaden your application domain or your user population. Now, C++ went the other way: it broadened, it's good in many areas, good for optimization, you can write elegant code in it and you can make a total mess of it. I figured that out quite a few years ago, so I started a project called the C++ Core Guidelines to address this problem: what does it take to write good, modern, safe C++?
There's a set of rules that can do that. You cannot get absolute safety if you allow all the constructs in C or C++, so you have to have restrictions. Furthermore, if you don't have library support and you try to write at the lowest level of the language, you get into messes that you can't prove your way out of; you can't write static analysis that proves correctness. So I came up with the idea of a three-pronged approach: you have a set of guidelines, they're supported by a minimal set of libraries, which are mostly in the standard library, and it's all supported by static analysis. I've been working on this for at least the last six years, and I've just put in a note about how you can use that approach to get guaranteed type and resource safety. It'll be in the next mailing of the standards committee.
The problem is that a lot of people like to fiddle with complicated little things, so I've been somewhat disappointed by the relatively slow progress, but it's coming. The Microsoft static analyzer that ships with Visual Studio is based on some of these ideas; I have sent test cases to the group that's doing it, and it can actually prove that you don't have any memory leaks and things like that. So we're getting there, and we can get complete safety out of that. In some areas that's important, in some areas it's not; there's a breadth of applications. Clearly, safety-critical things have to be done with greater care than non-safety-critical things. Things that are on the edge of systems and subject to attacks have to be really hardened against any kind of violation; systems that are behind effective firewalls, or aren't critical in the short term, can be written in a more conventional way. In other words, you can gradually tighten the system; it's not an either/or. That's one reason I'm trying to distinguish between what can be done and what should be done, what the language rules say and what the recommendations are. And I think it's absolutely necessary to support the guidelines with static analysis, because humans are not actually very good at following rules. It's not just that they don't want to follow rules, which happens, but if there are too many rules, if they're too complex from a human point of view, we don't follow them, we make mistakes, especially late at night or when we're in a hurry. So we need static analyzers for that.
A lot of static analysis is applying simple rules again and again and again. We are not good at that; good programmers don't like to do things 2000 times over a code base with half a million lines of code. We're just not good at that, so we need static analyzers. So we need what I think of as a subset-of-a-superset rule: first you enhance the language with a set of foundational libraries, and once you have a more ideal language that way, you can cut away and say "Don't do that lower-level stuff anymore, except inside the implementations of the libraries". Some of the unsafe or hard-to-prove things, if they are encapsulated and localized, you can deal with; that's what you do with libraries.
So we have span, very simple. The main problem with a lot of low-level code is the pointer. The pointer doesn't know what it points to; it doesn't know how many elements there are, for instance, and this has been known for a long time. I was talking to Dennis Ritchie about this; Dennis Ritchie proposed fat pointers to the C committee once, and they got rejected. A fat pointer is a pointer with a size on it, so you know how many elements there are. We designed one for the Core Guidelines. It's an open-source project; you can go on GitHub and find the rules, and you can see who contributed to it, which is quite a broad set of people, including me and friends, some people at Microsoft, some people at Google, some people at Red Hat. It's a very ambitious project, really; it's not your simple rule checker. We designed span, which is basically a pointer with a size associated with it, so that you can get range checking if you want, and you can also simplify your code, because once you know the size you can do a range-for over an array. And you can actually run faster than if you had a simple dynamic check on every access, much faster, you can run at superfast speeds. That one got into the standard library; our aim is to put ourselves out of business by having what we do become standard.
And I think that's the right way to do it. You can also gradually crank up the checking in existing code; that's being done, so that you don't have to say "I have this code base that my firm, my application, depends on. There are 2 million lines of code plus 10 million lines of testing and support code. Let's throw that away and write it in a modern language." That doesn't work; you're just going to get a new set of problems, you're playing whack-a-mole with the problems. What you can actually do, and it's not easy, but you can do it, is to say "Okay, let's put on the checker and let's only check that we don't have any memory leaks; you can do that statically. And let's go and check all the accesses through pointers to make sure they are checked, and replace them with spans and things like that." You can do it gradually; that's important. That was actually one of the reasons C++ succeeded in the first place: you could do a gradual change from the C-dominated world to better-checked, more easily expressed C++ code, not by throwing away your old C code, but by gradually improving it. So this gradual transformation is important, and it works.
[00:38:39] Yegor: And what is the status of this Core Guidelines linter? Is it a ready-to-use tool, or is it still in development?
[00:38:44] Bjarne: It's ready to use; it's not perfect. Go into Visual Studio and try it. Now, one thing I'm sad about is that I've been talking about this for at least five or six years, and I had hoped that the static-analysis community, which exists, would have picked up the idea and run with it. But they have tended to go for checkers that can check existing code without design changes, going for the low-hanging fruit: you check for something, then you check for something more, and that's all you do, and then you wait till you find the last bug, and we know what happens to projects like that. The Core Guidelines have a framework. They say: if you fulfill this framework, you will be safe, you'll get these properties; and then you can increase the quality of the checking to get close to that. The point is that instead of an open-ended search for the last bug, you now have a check on completeness against the framework. That is feasible, that is principled, whereas the other approach I think of as ad hoc.
[00:40:15] Yegor: Well, let's hope the tool develops. Actually, I think it will.
[00:40:21] Bjarne: The thing that's a problem is that currently it's just in Visual Studio. I really want it for every C++ programmer and every compiler, and I have not personally had the time to build the framework for static analysis and then go and do it. Without a time machine... maybe I should have done that, but I have a day job too, and I have to learn about people's problems and all of that, and that stops me from dropping out of the talking-and-learning loop and just digging in for three years or something to build the tool. I hope that others will do it. People have made progress, but not as much as I'd like; then again, people point out that I do tend to be impatient. I would like to see improvements.
[00:41:17] Yegor: Okay, speaking about time machines: if you could go back to the past, to the point where you were designing C++ and then making improvements and new features, what would you call the biggest mistake you've made?
[00:41:35] Bjarne: I often get questions like that, and actually I think that for the major decisions C++ is pretty good, while in the details there are lots and lots of things that are sub-optimal in retrospect. There's very little you could do even with a time machine. I started building C++ on a computer with 250K of memory running at way less than a megahertz. If I took my time machine, went back, and told the 1980s-vintage Bjarne about some of the techniques I'm recommending today, he'd tell me I was an idiot: it can't be done, it's physically impossible.
First you have to invent the modern world, and then you can use the modern tools. So there are things I tried that wouldn't work. I wanted things much more localized: in those days I wanted names in a separate translation unit not to escape from that translation unit unless explicitly declared extern. I was forced to take the C rule, where they all escape. I wanted things to be local by default; you got it for classes, but not in other places. I wanted something like auto back in 1983; I couldn't get it. But there are things I might have been able to do. I had been thinking about concepts, the checking of a template's requirements on its parameters, and I was trying to solve that problem back in 1988. I knew it was a problem; I talked to people doing other languages, people doing formal things, people doing practical things. I wanted three things out of templates: I wanted to be able to do things way beyond what I could imagine, generality; I wanted performance, so that abstractions like vector could compete with low-level C code; and of course I wanted decent interfaces. I was the one who put function argument type checking into C. The C crowd usually forgets that, but I needed function argument checking because otherwise I couldn't write quality code, and I wanted overloading, because otherwise I couldn't write generic code.
But anyway, I wanted those three things and I couldn't do it, I couldn't figure it out, so I picked the first two, and we got templates, which did generality and performance very well, but all the error messages are disgusting, because the compiler doesn't know what it is you're trying to do: there's no type checking at the call interface. Now concepts, in C++20, and they've been used in an earlier version for five or six years, can do that and do it well. The compilers still have to improve a bit for error messages, but the compiler now knows what you're trying to do: the template says it needs an iterator and it has to be a random-access iterator, and that can be checked at the point of call. Fine. The thing that has to do with time machines is that if I had known the solution based on concepts, I could have taken the time machine back, and I think Bjarne vintage 1988 would have understood it, and the implementation would have been simpler than the unconstrained template system we had to live with for 20 years. I think it might be the only place where a time machine would really have helped, because not only was it the right solution, I knew the problem, and the solution didn't require resources that weren't available then: it could be done on a one-megahertz machine with a megabyte or two of memory. People today think in gigahertz and gigabytes; in those days a solution had to fit into mega, not giga.
[00:46:35] Yegor: Okay, what if we have the time machine and we move to the future, what do you think would be the biggest breakthrough, technological or scientific breakthrough in the area of programming languages, in the coming 20-30 years?
[00:46:52] Bjarne: If I knew, I’d probably be working on it. I’m not a great grand visionary in the science-fiction sense of the word, and yeah, I’d like a tricorder, but you know, let’s stay with what we can build. So I tend to think in terms of... I’m patient, I know that things will happen in the longer term, and I’m pretty sure I’ll get ideas in the longer term, but I attempt to focus on the next sort of five to ten years, and I tend to focus on the framework I know, which is C++. So when people say AI, or quantum computing, I say “Okay, you do that, I’ll try and do the engineering infrastructure that makes it possible”. What do you think most of the AI is actually written in? TensorFlow is a C++ library; people write their code in something like Python and it translates into calls to a C++ library, so your computer will spend 98-99% of its time executing C++ to do the job. So I have my world and I’m very happy to help people build other worlds, but the grand vision, it’s a fairly simple thing: I want to do good abstractions, I want them to be general, I want them to be affordable, and if you have a good abstraction, I’ll help you implement it.
[00:48:39] Yegor: And C++, was it a scientific project, like a research project, or was it an engineering task for you?
[00:48:49] Bjarne: I had a problem: I wanted to build a distributed system, basically by partitioning a UNIX kernel so it could run on several computers connected with a communication infrastructure. That was my idea, so if I had succeeded, we would have gotten the first UNIX cluster and much better UNIX process support in the mid to late 1980s. But I got distracted, because the tools didn’t exist to build that. What I needed was low-level code plus abstraction: I needed to be able to communicate, I needed to build process schedulers, memory managers, things like that, and then I needed components of a distributed system, this bit talks to that bit using this protocol, that’s high-level service abstraction, and so I got distracted into doing this. So C++ was the result of trying to solve a problem. Whether it was research depends on how you think about it; it was not a research project in the sense that the outcome was supposed to be a paper. It was an engineering project in that the outcome was supposed to be a system.
Also, a research project can define 80% of a total problem, then solve that and declare that it solved that particular research problem. If you are doing system building, if you’re thinking about systems, you have to do 100% of it, you can’t just ignore 20%. I mean, you know, 90% of the project is spent on the last 10% of the code, and that applies recursively. If you are doing engineering you have to solve the whole problem well enough, and so in that sense it was an engineering project. I mean, Bell Labs at the time was the world’s best place for practical research and engineering, and I have a hard time actually telling exactly what is research, exactly what’s engineering and exactly what’s development. When I was a professor in Texas, they solved the problem: half of my salary was paid by computer science, the other half by engineering. So I’m also half an engineer. On the other hand I’m a member of the National Academy of Engineering, so by that definition I’m a full engineer, but that’s different. But yes, it was a very practical research project aimed at solving a particular problem, and then I generalized. That generalization helped a lot of people, and it had the side effect that I never got to build my system, because I was too busy helping my friends and colleagues with the abstractions and their systems.
[00:52:07] Yegor: That was actually... you just answered my second question. I was interested in what you were paid for: research and papers published in the end, or solving the practical problem. And you just answered it: the research was a side effect of solving the practical problem, right?
[00:52:23] Bjarne: Yes, yes it was, but Bell Labs was interesting in many, many ways. I spent a couple of years as a manager also, so I knew it from that side too. Not that I like management, but you know, sometimes you can’t resist the promotion; I think I may hold the record for resisting it the longest, but anyway I did. And somebody explained to me that rewards at Bell Labs were based, to a first order, on a two-dimensional thing. You have two dimensions: one dimension is benefit to the company, delivering systems, and the other dimension is what’s known as fame dust, that’s reputation and science papers. The ideal person in research at Bell Labs has a curve like that: they do things that have major impact on the real world through building of systems, and major impact on science in the future by writing really good papers. That’s the ideal employee. Essentially all employees fit on this curve; they are heavily biased towards one side or the other. I ended up in a lucky spot somewhere in between, where there was some impact on the world and some impact on science in terms of papers and scientific impact.
[00:54:05] Yegor: You probably got very lucky to get into such a place, because you know...
[00:54:10] Bjarne: Oh yes, oh yes.
[00:54:13] Yegor: Because in most places where programmers work right now, it’s not actually happening like this.
[00:54:20] Bjarne: The money men are not interested in that; they want to squeeze out profit in the short term, and I think that’s very, very sad. People often ask me where today’s Bell Labs is, and the answer is there isn’t one. Again, you need long-term planning: building up an organization like Bell Labs that can do what I just said requires that people can stay there, have a career there, go into management and apply it; it needs steady funding for decades. The labs lasted as Bell Labs for maybe eight decades, and most of its contributions were in hardware: the transistor, the charge-coupled device, the communication satellite, the cell phone system. There’s lots of stuff coming from there; it was an absolutely great place, and yes, I was very lucky. I turned up for an interview and the first thing they told me was that they didn’t have any jobs. This is not what you want to hear when you’ve just crossed the Atlantic to be interviewed. But they had some development jobs, so they sent me to a development organization, where I gave my prepared speech, and it must have been a good speech because they dragged me right back to the computer science research center the next day. Then I survived a fairly grueling interview there, and I got a job. And in Bell Labs research the jobs were very interesting: “Okay, we’ll give you some nice colleagues, we’ll give you an office, you’ll get a terminal to the computer; do something interesting and tell us what it was in a year’s time, thank you.” That was it. It sounds easy, but then you sit there staring at the wall and realize you have to do something good. You look at the doors to the other offices, where people sit who have done spectacular things, and you realize you’d better do something better than what you thought you could do. And you don’t know what it is; you have to figure that out. That was the main job of a researcher in those days: find a good project and do it.
[00:56:56] Yegor: Yeah, that’s a really interesting story. My final question is: what was your life goal at the time when you started, and how has it changed up to these days? Or did it not change at all?
[00:57:14] Bjarne: I don’t think it has changed much. For some reason I started out quite early wanting to do something, to do something good, to help with something; the feeling that to do something is to do something good, that other people will benefit from what you’re doing. And I wanted to do something sufficiently valuable that my family didn’t have to suffer while I was doing it, so you need both impact and profit. I am not rich, but I’m not poor, and my family hasn’t suffered from my ambition to do something good in the world. I think you feel much, much better if you do something that other people can appreciate, that other people can benefit from; that’s just nice. And doing it on a great scale is hard work and it takes a long time, but you know, when you see the stuff, when you go out to talk to people, you listen to their problems, you see their solutions, you see how they respond, that’s important. There’s a Danish author I was reading a long time ago as a teenager who said: he who does not plow must write. Basically, if you are not putting bread on the table for other people, you have to make a contribution; in his case it was literature. That’s not my skill, anyway, but if you can do both, it’s even better.
[00:59:06] Yegor: You did both, that’s for sure, and we are all very thankful to you for the programming language and for your contribution to computer science, which is enormous.
[00:59:17] Bjarne: Thank you.
[00:59:20] Yegor: That’s it for my questions. Thank you very much for attending the podcast; I’m sure thousands of people will watch it and learn a lot from your stories.
[00:59:28] Bjarne: Thank you very much. Bye.
[00:59:31] Yegor: Bye-bye.