
Dr. Dobb's TechNetCast

BELL LABS

TNC: We should start by pointing out that we're not at the location where C++ was
conceived.

BS: That's right. This is AT&T Research's new location at Florham Park. I worked for 16
years at Murray Hill, the old AT&T Bell Labs research building. But after the trivestiture,
the breakup of AT&T, that building went to Lucent. And I'm in AT&T Research, which is
the part of the old Bell Labs information sciences that AT&T kept.

TNC: What culture did you find at Bell Labs when you joined and how did it help
you in your professional work?

BS: The culture I found there, and which I think we retain to a large extent to this day
here in AT&T Labs, was one that was very supportive, very encouraging of young
researchers. We were given more freedom than in most places. The culture also
encouraged a greater degree of realism, a greater degree of knowledge of how code is
really produced and used in both advanced and mundane applications.

For example, my neighbor Al Aho taught me a bit about how to get my ideas through. He
was the world's expert in compilers. And he just happened to be next door.

In the same way, I got a lot of help and support in simply chatting with people like Brian
Kernighan, Dennis Ritchie, and Stephen Johnson. It's a very rich environment and we've
still got about half of it here.

TNC: Has the culture at Bell Labs and the presence of Dennis Ritchie, Brian
Kernighan, Al Aho et al. influenced any specific C++ language feature?

BS: It's hard to single out something. After all, this was my environment at the time. Brian
Kernighan was the one who finally convinced me to allow separate overloading of prefix
++ and postfix ++ when he tried to (and eventually succeeded in) replacing all of his
pointers in a conventional program with smart pointers. Stu Feldman was very influential
in the introduction of overloaded operators. Sandy Fraser helped me to think of classes as
interfaces so that I allowed the function definitions to be presented separately from the
class definition, rather than within the class definition as I had been used to in Simula.

Dennis Ritchie and Doug McIlroy famously gave me the courage to break with ANSI C on
the issue of the declaration of a function with an empty argument list. To preserve
compatibility with K&R C, I had invented the

int f(void);
notation for C with Classes. Initially, I retained the old meaning of

int f();

as "f() can accept any arguments that happen to be compatible with f()'s (unseen)
definition." Dennis and Doug called the f(void) style declaration an abomination, so in
C++

int f();

now has the obvious meaning: "f() takes no arguments." Unfortunately, ANSI C adopted
f(void) so I had to introduce f(void) into C++ for ANSI C compatibility.

You can find more examples in D&E.

PROGRAMMING AND WRITING

TNC: Can you describe some of your writing habits?

BS: The key thing is that I try to write driven by examples. So a lot of the thinking about
what's going into a book or a paper is determined by what is a good example.

If you look at my style, you'll find it characteristic that there are two or three paragraphs,
then a small section of code, then a paragraph or two, and more code, as opposed to huge
code examples on the side and then a huge amount of text that you read. I spend a lot of
time finding the examples.

TNC: How much of your time is spent writing? Does this vary?

BS: Things really change. Just now I don't spend too much time on writing. I'm sort of
worn out by finishing the third edition of "The C++ Programming Language" and writing
a few papers based on the standard. Now I'm playing with a compiler instead.

If you go back a few years, I was busy building systems. And then you go back a few
more years and you will find me writing again. These things come in waves.

TNC: Do you enjoy writing?

BS: I'm not sure. The idea that I should write a book came from my neighbor at work, Al
Aho. I was standing by his door complaining that the users were asking too many
questions, and they were always asking the same questions. So he says, "Well, you know
Bjarne, it's obvious you need to write a book." That had never occurred to me. So I sat
down and wrote it.

I'm not sure I like it all that much. It's hard work and it gets harder the more you write
because the expectations of your writing and the number of readers go up. So it becomes
really quite serious.
What I did enjoy writing was "The Design and Evolution of C++". I was able to describe
what has happened and do things that you can't do in a textbook or an academic article.
You can say thanks to your friends, you can say I did this for these reasons, not all of
which are strictly academic, and you can say I goofed in this particular place and not
there. You can also explain what didn't happen and why.

Such things have no place in a normal textbook. That is why I enjoyed writing "The
Design and Evolution of C++".

TNC: Can you contrast the thought process, the creative process that goes into
writing and developing software and thinking about software design? I sometimes
believe that programming is limiting --and I usually get this thought after long
periods of programming. Writing software requires thinking along certain paths,
that of a computer language and the few abstractions offered by the language.
Writing is a more creative effort and the language of course is richer and more
subtle. Would you agree with that?

BS: I think it's more subtle than that. I sometimes say that writing code is more like
writing poetry than writing prose. There is more structure to it.

I like writing code, more than I like writing text.

Writing code provides instant gratification. There's a feedback that you can't get from
writing text. And there's more objective criteria. You can look at the size of the code, the
speed of the code, the cleanliness of the code structure. It is really obvious. That's great.

And you don't have to worry too much about teaching. You have an audience, but it's
better defined.

When I write a book, I know that it is going to be read by a couple of hundred thousand
people, but I don't know those people, I do not know their backgrounds, and that makes it
very, very hard to be clear, concise, precise and comprehensible. I don't have that
problem with code.

INTERNET AND DISTRIBUTED COMPUTING

TNC: How long have you used the Internet?

BS: I think I encountered it first when I was in Cambridge, around '78. It was the
ARPANET back then. It reached across the Atlantic. And of course I've used it more or
less continuously since I came to AT&T and Bell Labs in '79. Mostly e-mail and file
transfer, in the early days, of course.

I have essentially lived on the Net for the last 20 years. Most of my mail for well over a
decade has been e-mail, as opposed to paper mail. Most of the software distribution in
the early days was through the Net.
TNC: When did you first come to realize that it would have the mass appeal that it
has now?

BS: I don't actually think I realized it until it had happened. I was too busy doing
technical work at the language level and using it as a tool. I was just pleased that it was
getting faster and easier to use. But I was a user.

Even though I have a background in distributed systems, I wasn't too interested in the end
user aspects of it.

TNC: When did it become apparent to you that the network was changing the way
software was not only being distributed, but also engineered?

BS: I'm not sure. Certainly I figured out it was excellent for distributing software back in
the early Eighties. Not just software, but also news related to software. Most software
systems are useless without documentation or teaching. [I was part of one of the first two
or three systems that were on the Usenet, newsgroups, distribution systems and such.]

I'm not sure to what extent there's been a change in the way software is engineered.
There's a lot of talk about it and clearly a lot of Web-related software today is done
differently. But in a lot of software, the application is dissociated from the user interface.
I think this is an excellent idea. It's the way things ought to be done. You can concentrate
on the two aspects separately. [In other cases,] the software is simply running in some
kind of "gadgets" and not in a particular distributed world. And the distribution comes
with shipping data and shipping new code for installation. This is not particularly new.

TNC: You've noted that you have a background in distributed systems. You
specialized in that field in your doctoral work. In "Design and Evolution" you
indicate this period showed you the importance of being able to compose software
out of well- delimited modules.

You started work on "C with Classes", the original C++, to help write a distributed
Unix kernel. To achieve this you needed to break down the problem domain --and
organize programs-- in more modular parts. I think somewhere you use the term
"express modularity".

At the same time, you even gave thought to adding primitives to the language to
express concurrency. This was in the very early days of C++.

So it seems that there was an intent at the beginning of C++ to write a language or
tool that could help in writing not only general-purpose systems, but specifically
distributed systems.

BS: Yes, and I think it has succeeded in the simple sense that a lot of distributed systems
have been written in C++.
One thing that hasn't become very easy is to write really simple distributed systems. One
reason for this is that I believed there were many reasonable ways of doing concurrency,
and I didn't want to limit the language to one of them. So you can't just say "this is the
way concurrency is done in C++".

Database guys have one kind of concurrency. Others use processes and thread packages
that allow you to access the system's native facility in C++. But the language itself
doesn't carry a particular concurrency model with it.

TNC: That belongs to the realm of libraries.

BS: It's in the realm of libraries. The absence of specific concurrency features in C++
shows my respect for the problem and my estimate of the importance of that field.

Building a single solution into a language is very limiting. I have a book on my shelf here
on parallel programming using C++. It discusses a lot of solutions and shows what people
have done in the field.

TNC: Many similar issues are not tackled directly by the language. The language is
not all-encompassing -- implementation details and specialized solutions are left to
external libraries. This is a core characteristic of C++ and becomes more apparent
when we compare it to other languages.

BS: That's deliberate. The way I used to express this in the early days was by saying that
computation was a problem that was solved by Dennis Ritchie and that I was dealing in
program organization. Some of the newer, high performance, numeric computation
libraries are an example of this. They seem to be beating Fortran at its own game. My
guess is you can't actually beat Fortran at its own game because machines are designed
specifically with Fortran in mind. But you can score equal --and in some cases win--
through better use of resources, fewer temporaries and such. That's a very interesting use
of the language and a result of the way it was designed. It was designed as a set of
facilities for creating new abstractions, new libraries.

TNC: You have said that you expect to see more libraries to extend the language --in
particular, application-specific libraries. And you expect these libraries to provide a
"new-style C++" interface. What is the situation here, do you believe you are being
heard?

BS: I think that it is early days yet. I hope I'm being heard and we are seeing some new
libraries that have been designed based on the facilities provided by Standard C++. The
Blitz++ and MTL libraries are examples of this (see my homepages). They equal and
outperform Fortran in key numeric application domains. I hope to see many more
libraries that combine elegance with uncompromising efficiency in their specific domain.
TNC: Is it conceivable that a "distributed algorithm generic library" can do for distributed
algorithms what the STL is doing for general-purpose, container-related algorithms? This
library could provide a runtime or environment for distributed computing.

BS: Yes, I think it is possible. That is, I think that it is possible to write a library that
provides a clean interface to containers and algorithms that could be executed using
multiple processors. I would suspect that its user-interface would look rather similar to
that of the STL. There are experiments going on in this direction.

On the other hand, I do not think that a single library could support every form of
concurrency needed by the C++ community. I think we need a few libraries to support
common styles. For example, I suspect we could have a standard threads library, a
standard heavy-process library, and a reasonably standard set of primitives for low-level
concurrency control. Having any or all of those would be a boon to the C++ community -
the primary problem is that the various platform vendors prefer to provide and support
their own (proprietary) solutions.

The standard committee considered this problem and decided (correctly, I'm sure) that it
didn't have the resources to tackle concurrency while also delivering a reasonably timely
standard. Maybe someone will come up with a great proposal for the 5-year revision.
However, to succeed the proposed library or family of libraries would have to be *great*
(like the STL). Otherwise, the users and vendors of the many established C++ libraries
would have little incentive to cooperate.

It is worth remembering that there are many fine C++ concurrency libraries.

TNC: Distributed systems can easily become very complex. Would it help if there
were language solutions to express some of the concepts of distributed computing
and concurrency?

BS: Certainly it would help if you had built-in language facilities for the kind of
concurrency you wanted. The problem is that the concurrency required for separate
threads in one address space, for different processes in a computer, for distributed
computing across a local area network, and for distributed computing over a wide area
network, differ. There are different kinds of constraints on the work, depending on
whether you're doing high-reliability transactions -- such as financial or life-critical
ones -- or some Web stuff that is allowed to fail.

My feeling is that nobody has come up with a set of concurrency primitives that serves all
the major application areas well.

TNC: Machines everywhere are now interconnected through the Internet: hosts,
workstations, business systems, even home PC's. But it seems that software, and
more specifically the way software is written, hasn't caught up.

And yet the future of programming at this point is clearly in distributed computing.
BS: I agree that software doesn't seem to have caught up. But I think that a lot of
distributed software is really software that merely has distributed parts. A lot of the logic could
and should be separate.

In many cases, there's far too much integration between the user interface and the
application. In my opinion, that makes it harder to design the software. It makes software
harder to change, harder to debug, and it makes it harder to take something from one
environment and put it into another.

The area where the interconnection of all the computers is showing itself in the worst
way is in the problems with system integrity, security, and privacy. We've hardly started
scratching the surface in those areas.

TNC: Some of the algorithms involved in basic concurrent programming are a level of
complexity above what is found in non-concurrent systems. We can imagine that a
programmer can rediscover a simple sort routine, but it is unlikely that a
programmer will on his own rediscover even some of the simpler algorithms used in
concurrent programming. At the same time, it is desirable that these algorithms and
solutions be readily available and well-known in order to build efficient, well-
designed distributed systems.

Given that a language-based solution does not appear appropriate, what is the best
way to package this logic in order to achieve this objective?

BS: First of all, I don't think the average programmer will invent a really good sorting
algorithm from scratch. There are perfectly good libraries for this.

Once upon a time, it was the job of programmers to invent sort algorithms. These days
almost all programmers use sort algorithms and a few experts spend their lives
improving on the state of the art for particular purposes.

We need to make concurrency implicit in a similar way so that the average programmer
will not have to do complicated concurrency operations. If concurrency is internal to a
class, to an object, it becomes much easier to handle. My feeling is that a lot of
concurrency can be packaged so that the end user has only the vaguest idea of how it's
really done.

COMPONENT-BASED PROGRAMMING AND CLASS EVOLVABILITY

TNC: We were talking about distributed systems, and component-based software, of
course, is a hot topic these days. What solutions do you find appropriate to the kind
of problems that component-based software development tries to solve?

BS: I'm starting some projects trying to learn more about the topic. Lots of the projects
within AT&T use a variety of forms of component architectures. COM and CORBA are
the ones that spring to mind most easily.
The notion seems sound and we're getting the technologies and implementations to do it.
From a C++ point of view, I think one problem is that the bindings are still a bit primitive
and often done with a bias towards C. The standard CORBA binding, for instance, doesn't
support the C++ standard library. And the COM standard way of doing things requires
filling in tables in three places each time you want to do one thing. So I think lots of
improvements could be done.

But certainly the technologies are useful and help when it comes to building distributed
systems.

TNC: Operating systems today and for a while now offer the ability to have shared,
dynamic libraries. This has naturally become the preferred method for distributing
components.

These solutions are inherently dynamic. Bindings in C++, on the other hand, are
inherently static. Dynamically evolving a C++ class without recompilation requires
using idioms and techniques that are not always straightforward.

BS: You can hook things together if you have a platform with a binary interface. Most
platforms are not yet standardized, but work is going on along those lines.

On the Windows platform, if two different C compilers don't have the same C calling
sequences, libraries generated by these compilers will not link together. The same occurs
with C++. It will work if you conform to the same object layout and calling sequences.
I know that several of the platforms are trying to establish such binary standards.

But as long as we have different platforms and as long as people experiment with new
ways of using those platforms, you're not going to get a perfect binary standard. And I
think that is probably a good thing, because once you have a perfect binary standard,
unless it is perfect you are in deep trouble.

TNC: How about the convergence between the C++ object model and component
object models? Programmers today design classes in C++ and then describe related
objects in a component framework. In many cases, the component object is a
wrapper around the C++ object.

Isn't there a convergence here that can be leveraged?

BS: I hope there's a convergence. But people should realize that there's not one kind of
object that serves all needs. A "printer spooler" can be seen as an object and can be
manipulated as an object in COM or CORBA. A floating point number or a complex
number is also an object and can be manipulated as such. But the implementation of those
operations are critically different. The kind of services you require and communication
with those two kinds of objects is different.
A "point" is not a "printer spooler" and vice versa. So what I would like to see is a more
natural mapping between the C++ language facilities and the object models. There is no
particular reason why you couldn't say this class "point" is just a plain old class and you
have all the usual bindings. And this class that happens to be called "printer spooler" should have
all the facilities of this kind of object and communication to it should follow all the rules
of this standard. By standard, I mean industry standard here.

You can do that, [but] it's just too difficult today.

TNC: Is it appropriate for compiler vendors to make specific, proprietary changes
to their compilers in order to support specific component object architectures?

BS: It depends on what kind of changes. I distinguish between benign changes and
changes that are there for lock-in.

I was one of the people who wasn't too upset with "near" and "far" in C. I regarded them
as benign. It allowed me to use and take advantage of some peculiarities of a hardware
architecture. And if I didn't need "near" and "far", I could define them to nothing and my
program was exactly what it was. The semantics of the program was not changed.

In the same way, I could imagine facilities where the only difference between a
component and an object that is not a component would be that one had a base class and a
prefix maybe that associates this object with all these elaborate protocols that provide
reliability, distribution, etc. But the semantics of the calls would remain the same and
the semantics of the code would remain the same.

So I would like to see a minimal difference that simply allows the user to control what the
communication protocol is, rather than anything that drives deeper into the language.

In particular, I don't like additions to a language that change the meaning. If it says x+y
you want it to mean x+y as defined in the standard, not something completely different,
because somebody in a header file stated that this should be interpreted on the company
X's rules.

TNC: This question is in part motivated by an attempt by MS to merge component
attributes, as expressed in IDL, into C++ class declarations. Basically, if I remember
well, the declarations contained in .h and IDL files will somehow be merged in the
next release of VC. The compiler also compiles IDL and generates the COM "glue"
classes and C++ code. It also supports an interface keyword -- similar to Java
interfaces or Objective-C protocol classes.

BS: There are several issues here. Clearly, you would like the mapping of C++ facilities
onto a component/distribution model to be as simple, direct, and convenient as possible. I
think this is possible. Certainly, the current state in COM where you have to make entries
into several tables to map a class onto an object is cumbersome and sub-optimal. We
really should write C++ programs in an environment supporting a component model
without having to worry about heavy extra-linguistic machinery, code generators, macros,
etc.

Whether extra keywords are necessary or helpful here is not certain. Certainly, this could
be done badly. However, I see no reason why one couldn't provide a relatively harmless
directive to the language meaning "map this class" without doing violence to the type
system and other rules of Standard C++. That might take the form of a keyword. If you
needed to port code to a different system you could "#define" it away as part of the
porting effort - I suspect an extra keyword would be the least of the worries to the person
doing the port. I would consider it important that the object mapping could be localized
and didn't have wide ranging implications on the semantics of ordinary code (though it
could make the compiler reject certain constructs that couldn't fit the object model - such
as passing pointers).

What would be nasty would be a lot of keywords that needed to be used throughout a
program and had the effect of changing the semantics from what ISO requires.

TNC: Is there a fundamental, conceptual opposition between C++, a static language
where bindings (variables, functions, classes), by design, are resolved locally and at
compile/link time, and the momentum behind dynamic, distributed systems where
checking and binding is left to runtime?

BS: I don't think so. You can build very successful systems based on the idea that you
know that an object has a specific interface and that a linker has verified that the interface
you use is the right one. This notion isn't seriously affected by distribution. See, for
example, the static invocation interface of CORBA. That's almost pure C++ as far as the
fundamental type system is concerned. Where feasible, I prefer the notion of systems
where types are checked/matched at link/load time so that their use can be free of
interpretation and run-time type checking. Naturally, few large systems can be completely
statically checked. However, this again is not a function of distribution.

A linker that correctly handles separately-compiled templates would be a great asset here.

TNC: Class evolution (a term you use in D&E) includes the ability to upgrade the
private implementation and protocol of a class without breaking the client and
requiring recompilation. This has not proved easy in C++ and requires a conscious
design effort (see John Lakos book or MS COM programming idioms). At the same
time, this issue is critical for component-based software distribution. How can some
of these issues be solved in C++, and would it be desirable to have language support
for this, and what would that support look like?

BS: I don't think class evolution is easy in any system or language. In Standard C++ my
favorite technique is to define interfaces within namespaces so that I can force
recompilation and rebinding by changing the namespace (see D&E). However, I have no
idea to what extent such techniques are applicable to large, frequently updated,
distributed systems. I would be quite surprised if there was an easy solution.
My favorite way of avoiding breaking user code by changing base classes is simply to
make major interfaces abstract classes. That way, the implementation details are in the
derived classes and there is no private data to cause recompilation after change. The
public interface can be kept relatively stable.

TNC: Linkage is the second half of the problem ("dynamic class loading"). Could a
dynamic, runtime linker be an appropriate solution? Or perhaps this could be
expressed using namespaces ("distributed namespaces")?

BS: Maybe both. A run-time linker is definitely needed and maybe namespaces or
something similar is the right unit of linking in practical distributed systems. However,
namespaces don't seem to be essential/required in this context; their use is just good
programming style.

TNC: One solution to all these issues, one that ties with the extensibility of the
language through libraries, may be to build this distributed runtime functionality
through a standard sockets library for C++. Some of the distributed features of Java
or component object models could then be added in a standardized way on top of
this standard library without breaking the core architecture and "spirit" of C++.

BS: That's a good idea. However, it points to the major weakness of C++ compared to
well-financed corporately-sponsored (or state-sponsored) languages. Who would pay for
the development, distribution, marketing, and maintenance of such a library? Where there
is a will, there are answers to these questions. For example, some corporations make high
quality libraries available (consider HP's and SGI's support of the STL and its further
developments). Then there are the "open source" proponents (for example, consider
Cygnus' development of the EGCS compiler and the libraries that go with it).
However, it is not easy to be a language community without a major "patron/sponsor."

TNC: Do you believe the ground has somewhat shifted under you since work started
on C++? For example, at the time, the emphasis was on static checking, C
compatibility (in particular as regards linkage), and performance. These concerns were
important to the success of C++ but they may seem not as relevant today. Would the
design objectives for C++ and the focus be somewhat different if the language was
designed today? What would these differences be?

BS: Probably some ground shifted somewhere. However, there are still many application
areas where static checking, C compatibility, and performance are essential. That is the
reason that C++ use is still growing steadily. The only thing that I can think of that has
consistently outpaced the growth of hardware performance is human expectation.

There will be a strong need for an efficient and flexible general purpose programming
language for decades to come. For now, I think C++ is the best choice for an enormous
range of demanding applications.
A good tool ages more slowly than some people seem to think, and new tools are slower to
mature.

I'd rather not guess about what C++ would have looked like if I had started to design it a
short time ago. A language is a solution to a set of problems based on a set of principles.
It is too hard for me to try to estimate what problems I would have chosen to tackle a few
years ago had I not had C++. I suspect that my concern for static type checking,
efficiency, and generality would have manifested themselves in some way, as would my
interest in abstraction techniques and modularity.

NEW STYLE C++

TNC: Let me ask you about "new style C++".

I listened to a speech of yours earlier this year entitled "C++ as New Language".
You indicated that the most recent additions to the language made C++ a safer,
higher-level language. And you called on programmers to make an effort to learn and
code in this new style. Can you elaborate on this?

BS: One of the purposes of making changes to the language and adding a standard library
is of course to make certain things much easier and much better. Only if we change the
way we think about things will we get such major changes. Otherwise, we'll just do the
old stuff with a new syntax and we'll get minor improvements.

I think it is possible to use C++ as a higher-level language. One way of getting started
with that is quite simple: use strings as much as you can as opposed to arrays of
characters, use the algorithms in the standard library whenever you can, rather than
writing random code, use vectors, lists and maps rather than arrays. All this makes it
possible to immediately raise the level of abstraction in the code quite a bit.

I regard the standard library as only the first example of a new breed of libraries that will
allow us to encapsulate concepts and make them easier to use. I expect universities and
commercial organizations to produce many interesting libraries.

TNC: Do you see the low-level, C roots of C++ as an embarrassment to
programmers who advocate higher-level, safer programming?

BS: It might be an embarrassment, but not to me. I chose C as a base for my work
because I needed something that allowed me to express the machine fairly directly. And
machines are things with addresses and sequences of objects. Unless you can do efficient
operations on that, you're sunk.

You can have other abstractions of the machine, and some are reasonably successful. But
this is the one I chose. And so at the low level, at the basic level, you have the machine,
or the C roots of C++. You use that to build more interesting things.
We shouldn't [directly] use integers and bits and bytes all the time, and we don't have to.

TNC: You sometimes recommend that programmers learn C++ from the top down,
start with high level constructs and ignore some of the lower level features of the
language. Do you believe that this is possible?

BS: Oh, yes. I believe it's possible because I've seen it done. You can start programmers
off using vectors and strings and lists, and leave things like pointers, especially
interesting uses of pointers, to much later. Also leave arrays to somewhat later.

Andrew Koenig and Barbara Moo gave a course at the Stanford summer school last year
that had been completely revamped from the previous year. The net effect was that they
were able to do things at the end of the second day that equaled what they used to do at
the end of the fifth day. That was made possible by a change in approach plus a library
supporting the new view.

EMBEDDED C++

TNC: Can you describe some of the work being done to make C++ more
appropriate for embedded systems? What are the issues and how are they being
solved?

BS: C++ has always been used for embedded systems. I think I wrote my first C with
Classes program to be downloaded into processors that were not part of a classical
computer about 1983 - and I wasn't the first to do that.

Standard C++ is excellent for embedded systems programming because of its
comprehensive support for a variety of efficient programming styles. In particular, it is
often essential that C++ retains C's ability to directly and efficiently manipulate
hardware entities such as words, bytes, bits, pointers, etc. The "zero-overhead
entities such as words, bytes, bits, pointers, etc. The "zero-overhead principle" ("what
you don't use, you don't pay for;" see D&E) ensures that elegant facilities can be
economically built from these close-to-the-machine facilities. Importantly, with the
exception of free store allocation and exception handling the performance of every C++
language feature is completely predictable.

I don't think anything dramatic needs to be done to make Standard C++ appropriate for
embedded systems programming. The proof is that it is actually used for demanding
embedded systems applications. Naturally, a programmer who is concerned about run-
time efficiency, space-efficiency, and predictability will have to understand more about
C++ than a programmer who can afford to ignore such issues. However, that is not
something that is peculiar to C++.

If a design requires a programmer to refrain from dynamic allocation, then that is easily
avoided - even without giving up the benefits of generic and object-oriented
programming. Similarly, I have seen applications where exceptions and certain uses of
standard streams had to be avoided, but so what? Even after avoiding or disabling these
few facilities, C++ provides a massive advantage compared to C or
assembler. If you can afford C and have a C++ implementation available for your
hardware platform, you can afford to use C++. For many embedded applications, even
exceptions are affordable and useful.

Maybe you are referring to the "Embedded C++" dialect of C++ promoted by a group of
Japanese embedded systems tool vendors? If so, I can add that I consider that subset an
idea whose time has passed. The fear of complexity and inefficiency was never
warranted, and the fear of instability should have dissolved with the stabilization of the
ISO C++ standard over the last few years and its final unanimous ratification last year.

This is not to say that C++ is perfect or that there is nothing to be done to make C++
implementations that are more efficient (in various ways) and predictable. The ISO C++
standards committee has created a sub-group to look into such issues as they relate to
embedded systems developers and to other communities that have performance
requirements beyond what is generally recognized.

PROGRAMMING LANGUAGES AND ABSTRACTIONS

TNC: Why do we insist that programming languages be simple and easy to learn?
Shouldn't we want that professional languages be productive, efficient, expressive,
versatile, safe -even if this requires a significant learning curve? We don't expect
natural languages to be simple, on the contrary we want them to be "rich" and
complex.

BS: Maybe it is simply wishful thinking. Many seem to think that a language that cannot
be used to deliver semi-useful toys in a week must be too complicated. Many professors
have only a term to teach a semi-reluctant student population how to program. Many are
unwilling to accept that serious programming is not for everybody.

I noticed that "The C++ Programming Language (3rd Edition)" is slightly shorter than
my daughter's 3rd year college biochemistry textbook. Nobody seems to find it strange
that you can't be accepted as even the most junior biochemist without being willing and
able to digest that book in a year - and to do the accompanying lab work. On the other
hand, many people seem to object to the idea that a college course providing a similar
amount of information and practical work should be necessary for someone becoming a
junior programmer. It is not C++ itself that takes many months to learn, it is the
fundamental programming styles that it supports.

I think all languages start as "firebrands" wielding their "simplicity" as weapons against
the "horribly complex legacy languages." Once reality sets in, the new languages mature
by acquiring facilities and complexities equivalent to other successful languages. This has
happened for every successful language (and for a few unsuccessful ones :-). The
richness of facilities is needed to support the diverse communities served by a general
purpose programming language.

TNC: I've heard Bertrand Meyer say that object-orientation is here to stay forever
because it derives its legitimacy from some natural, innate way of perceiving and
breaking down problems. But isn't it legitimate simply because it works? And is it
conceivable that new abstractions in the near future will replace it? What is the
foundation for the abstractions we commonly use in programming?

BS: [Bertrand Meyer] is right on this. When asked, I give a similar answer and usually
quote Kristen Nygaard - one of the designers of Simula and the main claimant to the
title "Father of OO" - he stated it simply: "something else might come along, but OO will
stay in the same way as addition stayed after multiplication was invented; addition is
fundamental and will be part of any sane system of arithmetic independently of what else
might be added. Similarly, OO will be part of any complete programming system."

Note how this differs from the fanatical and incorrect claims that *all* programming
should be object-oriented and that *all* good programs are OO. I gave a talk entitled
"Why C++ isn't just an object-oriented programming language" on this topic at OOPSLA
a couple of years back. The paper can be found on my homepages.

TNC: Before we wrap it up, let me ask you the unavoidable end of interview
question. What are some of the directions C++ is taking now, post-standard?

BS: I think primarily stabilization. We have to get all the compilers up to a standard and
we have to get the standard library implementations to be uniform, both in the semantics
and the performance they deliver.

The libraries are so designed that you can get very good efficiency. You want that
everywhere so that users can move a library and have the same performance
characteristics.

As usual, I would like to see programming environments that support analysis of code
and examination of code better than the current ones. I would like to see many, many
libraries to support concepts in a more direct way. Concurrency is one way, but also
application concepts. And then we mentioned the component models. I would like to see
higher level bindings to those and I would like to see the bindings to be unobtrusive.

TNC: Is there a post-C++ future for yourself? Can you see it from here?

BS: Probably, but just now I want to write some code using C++ and other things.

TNC: Thank you very much for welcoming us in your office.
