

JAVA - A DRAFT COLLECTION OF NOTES

Harold Thimbleby

http://www.cs.mdx.ac.uk/

Middlesex University

LONDON, N11 2NQ

Java is a popular programming language. This paper discusses its design and identifies a range of flaws and curious features.

To be successful, a system must have a user base. The users must get enough of their tasks achieved within reasonable limits of performance, and given reasonable limits of training. Java certainly achieves this, and has become a very popular programming language. By summer 1997, it was claimed to have half a million programmers. By June, 300,000 programmers had loaded upgrades to their Java environments. The language is taught in 200 universities, and has applications from smart cards to the Mars Lander.

In many ways, Java is a classic computer application. Its design and introduction required trade-offs. Its design had to balance being different and "better." It had to successfully draw on enough users to make it a viable product. It needed to have users. Java closely resembles C and C++, so existing programmers find it familiar. Yet these languages have problems and ambiguities, so Java made changes to have advantages over them.

This paper is concerned with non-technical trade-offs in the design of Java. As an example of a technical trade-off, Java has a security model, which leads to it being interpreted, which in turn reduces its speed. This is obviously a design issue, but improvements to Java's speed do not require significant user retraining. In contrast, changes to the language as a notation do require user retraining. This paper is concerned with Java as a notation. We shall argue that it has unfortunate and avoidable weaknesses. It is very difficult to discuss design because of variations in assumptions. For example, many aspects of Java's design have been inherited from C. Notationally it is very similar. But a language such as BCPL (on which C was based) is notationally different and more relaxed -- in many contexts, semicolons are optional. We could argue, from the BCPL experience, that typing Java would be made easier if it had adopted the BCPL conventions for optional semicolons. Such comparisons really require empirical evidence about which notations and styles programmers find easier and more reliable. Unfortunately, there is very little knowledge in this area, and what there is (e.g., that guarded commands are easy to understand) conflicts with tradition (none of BCPL, C or C++ supports guarded commands). Therefore, we shall try to critique the design of Java by taking it on its own terms: if part of the language works this way, and part that way, for no obvious reason, then this may be a cause for critique in this paper.

Unfortunately, we are arguing from hindsight: any changes now to Java would compromise it, and almost certainly lose it its massive following. Practically, this paper will warn Java programmers of certain weaknesses in the language: they may therefore be able to take suitable precautions. Also, this paper will encourage future programming language designers to consider their language design more carefully: Java has so many users that even small improvements -- at small cost to the designers -- would have had enormous, indeed world-wide, benefits for the huge numbers of programmers.

For brevity, and sufficiently to make our point, we only discuss Java as a programming language; we do not discuss package details, such as the lack of control over garbage collection or about the design of the abstract windowing kit, and other essential parts of the Java phenomenon.

Since reference to the Java Programming Language book (Arnold & Gosling, 1998) will be used frequently throughout this paper, we refer to it as JPL.

Leverage and programming language paradigms

Computers are faster than humans, and the purpose of programming is to define the behaviour of computers as effectively as possible so they operate as intended under future circumstances that have not been predicted in complete detail. In an accounting program, we might write balance=income-expenditure, intending this to enable a computer to calculate an account's balance without further human intervention. The program fragment is intended to specify the general behaviour of the computer, that it should subtract two unspecified numbers, whose actual values will be provided in the future. Naturally it follows that the semantics of programming languages must be well-defined, otherwise programmers have to continually interfere with the operation of the computer to check it is behaving as intended, or at least then would have to undertake extensive checking before a program could be relied on to work without supervision. Badly defined semantics therefore lose the leverage programming languages are intended to provide.

Very few applications of computers conform nicely to the limitations of computers. Thus, if the example above was used to calculate the national debt, there may be problems with the representation of large numbers. Although being explicit about such limitations may make the behaviour of a program better defined, being explicit can be cumbersome and make the program harder to write and read -- it also forces the programmer into considering exceptional behaviour, which may itself be a distraction from considering more usual, and hence more salient, behaviour. What is gained in precision is lost in the human factors consequences of making programming harder to do properly.

In practice, programming languages take certain sorts of semantics as given. This defines their paradigm (Thimbleby, 1989). Since all programming languages share certain common assumptions (e.g., that numbers have finite representations) the term paradigm is usually taken to be what hidden semantics are distinctive of a particular language. Java, for instance, has an object oriented paradigm. In Java, objects can be created and manipulated without specifying the semantics of those objects, in a way that would not be possible in a language such as Pascal. In Pascal, any manipulation of objects would need to be specified by the programmer explicitly: Pascal is not object oriented.

Unfortunately, there is little agreement exactly what many paradigms should mean. Different object oriented languages provide different semantics for objects. The main problem is that programming language designers and programming language users may not agree; moreover, because a paradigm is not explicit in the programming language notation, there may be no simple way to uncover this disagreement. The differences may be very subtle. The consequences are that programs written in such languages will be unreliable.

In a sense, a programming language designer cannot be responsible for the ignorance of a programmer. Programming languages typically make explicit distinctions intended to guide the programmer in making appropriate programming decisions. Unfortunately, what may seem explicit to a designer may not be so obvious to a user. A case in point in Java is that the language distinguishes between 8, 16, 32 and 64 bit numbers, but it is possible to write programs where such a distinction is accidentally overlooked, because the distinction is not obvious enough to the programmer. The result will be a program that works correctly until circumstances offer it a number that is at an unintended precision; subsequent calculations will go awry, and the semantics of the program will be quite other than that intended.

Indeed Java makes a very inconspicuous distinction between different precisions: 71 is a 32 bit numeral, whereas 7l is a 64 bit numeral. The problem is compounded further, because Java automatically converts from one precision to another, depending on the context. It is therefore very easy to write a program which appears -- in the mind of the programmer -- to work at one precision, whereas, in fact, it is working at another. Thus, even where semantics are explicit, and there is no "technical" problem, the programming language notation may encourage certain sorts of human error. Possibly the Java designers decided this was a sensible way of designing the language (though the Java language books written by the language designers themselves explicitly indicate otherwise), but if so, why is the way Java handles the 32/64 bit issue with integers and longs very different from the way it handles the "same" issue with 32/64 bit floating point numbers? -- as we shall show below.
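A minimal sketch of the trap (the class and method names are mine): because 1 is a 32 bit literal, the addition below overflows before the result is widened to 64 bits, whereas the 64 bit literal 1L promotes the arithmetic itself to 64 bits.

```java
public class Precision {
    // 32-bit arithmetic; the result is widened to 64 bits only afterwards,
    // by which time any overflow has already happened
    static long widenAfter(int i) { return i + 1; }

    // the 64-bit literal 1L promotes the whole addition to 64 bits
    static long widenBefore(int i) { return i + 1L; }

    public static void main(String[] args) {
        System.out.println(widenAfter(Integer.MAX_VALUE));  // -2147483648
        System.out.println(widenBefore(Integer.MAX_VALUE)); //  2147483648
    }
}
```

(Writing the suffix as an upper-case L at least avoids the 7l/71 lookalike problem.)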

There will always be human error. The programming language designer has to make a trade-off. Some sorts of error can be anticipated, and made harder to commit. Yet each barrier against accidental error can make deliberate exploitation of a programming language feature harder. For example, if Java had required some explicit and obvious statement whenever a precision of less than 64 bits was wanted, then (as many numbers are 32 bit) programmer productivity would be reduced.

It is an empirical question what sorts of errors programmers make, and with what sorts of frequencies. And there are errors that, while frequently made, are easily detected.

There is, then, a second-order issue in language design. Given that programming errors are inevitable (and program requirements from users change), how easily can a program be converted to one that has fewer errors, and better matches the (possibly revised) requirements? Green (1995) has called this property viscosity. In Java, the problem of precision is not viscous: the 32 bit numeral 71 can be made into a 64 bit numeral by appending a letter l, as in 71l. A misunderstanding about object orientation may, in contrast, be extraordinarily viscous. An entire Java program may rely on a certain understanding of object orientation, and the change from the incorrect to the correct use of the paradigm may require a complete redesign and rewrite of the program.

Programming languages are not designed in an intellectual vacuum, with complete freedom of choice for their features. Java is very similar to, and builds on, the languages C and C++. Similarities with its predecessors makes Java easier to learn, yet it may also make it harder to use. A very simple example is that casts in Java are prefix, as they are in C. Having prefix casts causes a proliferation of brackets, that would have been avoided by suffix casts (or, conversely by having prefix field and method selectors). One may have achieved a more readable and more easily written language, but it would have had an unfamiliar feel. No doubt, if Java had been too unfamiliar, it would never have achieved its current huge popularity.

Finally, there are features of Java that are essentially new, which have no familiar precursors in C or C++, or which might have provided generalisations that behaved like C/C++ except where there were obvious notational extensions. Making such features general or uniform may have had unmitigated benefits for Java programmers, but may have introduced complexities -- opportunities for human error -- on the part of Java's designers. A simple example is the design choice for array literals. Java, like C, only permits array literals in initialising expressions. Semantically, the Java designers could have permitted array literals in any array expression, but this generalisation may have been awkward to implement correctly, or it may have interacted with some other language feature in an unforeseen way. It may simply have been that providing a language feature takes time out of the language designers' other duties, and their priorities were such that this was thought poor use of their resources. Effectively, the saving for the designers is shifted onto a cost for the programming language users: every time they want an array literal, they have to construct an initialising context for it.

Hypothetically, one can gain some insight into the priorities of language designers by the style of manuals and specifications they provide. Language features are not just considered in isolation; they must also be explained so that programmers may correctly understand them. What may be a saving in the design of a language may be paid for in difficulties or unnecessary distinctions in the specification or other explanatory material. As for arrays, Java introduces a distinction between array initialisers (only permitted in array declarations) and array creation expressions (permitted in any array expressions). That the semantics of these are so similar, namely they both mean initialised arrays, suggests that the distinction is superfluous.

Programming language design principles

Languages can have different sorts of guiding design principles, with different consequences for users. Thus Pascal is a small, closely defined language, ideal for teaching; it has a principle of no surprising inefficiency -- for example, no assignment statement causes unusual amounts of work. This is unlike Algol 68. Common LISP, likewise, has a basic model which is enormously extended by libraries of definitions. Once LISP's simple semantic model has been understood, learning LISP is straightforward -- there is just a lot of it (the same comment applies to Smalltalk). My experience of learning Java is that the language's "principles" are in fact just buzzwords (simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded and dynamic), and that every day I am still learning new exceptions or variations on "rules" I thought I had understood earlier. Thus, in an important way, it is more complex than it at first sight appears; it is more complex than any other programming language I have ever learnt.

Tennent (1981) gives a good introduction to programming language principles, notably the principle of abstraction and the principle of correspondence; Schmidt (1994) is a more recent, and more technical book. The more formal language design principles (later) refer to "meaningful syntactic categories." A meaningful syntactic category is a fragment of a program that means something -- it cannot be a comment, for example -- and by being "syntactic" it is a complete phrase. Whilst perfectly meaningful, the first three characters of strings are not syntactic categories in this sense.

Clarity

Occam's Razor

Fluidity

Uniformity

Java lacks uniformity. This unfortunately ensures that programmers cannot generalise their existing knowledge as they learn the language. For example, final classes, final fields and final methods are all completely different, despite using the same keyword. There is no particular meaning for final in Java. If you understand final methods, you have no idea what final fields are. And final classes are different again.

Another aspect of uniformity is a programming language's distinction between what it supplies and what programmers are able to do. Thus, anything provided in LISP can be done by the programmer; indeed this is one of the advantages of LISP, that it is trivial to make new extensions or variations of existing features. LISP programmers often define, in LISP, new languages to solve their problems. In Pascal, in contrast, some features that look like user-level features cannot in fact be defined by the user at all. Obvious examples here are Pascal's read and write statements. They look like procedure calls, but a programmer cannot write procedures that are anything like as general. Programmers are restricted to fixed length, fixed type formal parameters; but the built-in write statements have variable length, variable type and optional parameters. Java, like Pascal, has built-in features that could have been uniform. Java's treatment of strings, for example, has them almost, but not quite, behaving like other objects. But programmers cannot even subclass strings to help define their own string classes with new methods.

Low redundancy

One of Java's explicit design goals was to "eliminate redundancy" from C and C++, which are its direct ancestors. (For example, Java's classes achieve more nicely what C's structs, unions and typedefs do.) I do not know of any HCI justification for this; given that Java is a complex language, making it simpler is desirable, but removing alternative ways of expressing programming concepts may -- or may not? -- make it harder to learn for programmers. By definition, while one is learning a language, one does not know all of it. It follows that learners would be helped by redundancy, since this would improve the chances that they have already learnt some concept that can be applied to the current task. A language without redundancy would require programmers to know exactly what is required for any given task. Presumably the trade-off is that you cannot eliminate redundancy without either making the language more expressive or making it larger.

Principle of Abstraction

The principle of abstraction states that meaningful syntactic categories in a programming language can be abstracted: that is, replaced with names that stand for the body that has been abstracted. For example, most programming languages allow expressions to be replaced by functions; a function is an abstraction of an expression, and the name of the function denotes the expression. Abstractions are essential for good programming practice, and of course they are usually implemented in space-saving ways (notwithstanding "in-lining" functions so they can be more efficiently executed).

An important practical reason for supporting abstractions is that they reduce errors. The "same" code need only be written down once, and once tested is effectively tested in all places where it is used. If we look at Java through the principle of abstraction, we might note that there is, surprisingly, no way to write common catch code. If you want to handle the same exception the same way in several places, then the code has to be written out in full in each place: it cannot be abstracted.
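The closest workaround (a sketch; the method names are mine) is to move the body of the catch into an ordinary method -- but the catch clause itself must still be repeated at every site:

```java
import java.io.IOException;

public class CatchAbstraction {
    // the shared handler: the *body* of the catch is abstracted here...
    static String report(Exception e) {
        return "handled: " + e.getMessage();
    }

    static String risky(boolean fail) {
        try {
            if (fail) throw new IOException("disk full");
            return "ok";
        } catch (IOException e) {  // ...but this clause itself cannot be abstracted
            return report(e);
        }
    }
}
```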

Arrays fall foul of the principle of abstraction: an array declaration can never be abstracted.

Principle of Parameterisation

Any meaningful syntactic category in a programming language can have parameters. Many languages, however, restrict parameters to abstractions -- one obtains the advantage of parameters, but at the expense of having to introduce "temporary" names for the abstractions. Most programming languages distinguish between compile-time and run-time parameters. In Java, a method is an abstracted command that can be parameterised at run-time; a class can be parameterised at compile-time (it extends and implements other classes or interfaces), but cannot be abstracted or parameterised at run-time.

Principle of Qualification

Any meaningful category can have local definitions. One of the changes from Java 1 to 1.1 was the introduction of inner classes: a feature that is directly suggested by the principle of qualification.

Principle of Correspondence

The principle of correspondence says that if two parts of a language semantically correspond, they should have corresponding features. Specifically, abstractions, qualifications and parameters all introduce names, and therefore their mechanisms should be the same. One of the changes from Java 1 to 1.1 was the introduction of final parameters; the language had already had final local definitions, so this was an obvious development.

A class can implement several interfaces, but until anonymous classes were introduced in Java 1.1, it was difficult to have a method parameter that accepted any of several interfaces.

Typing in Java

Java is dynamically typed. The type correctness of a program is not known at compile time. The following Java example demonstrates this; it involves creating an Object and casting it to a Character. This is statically correct (i.e., it compiles without error), but at run time, throws an error (a ClassCastException) because Objects are not Characters.

Character c = (Character) new Object();

// causes runtime java.lang.ClassCastException

Although this is a simple example, in practice type errors very readily occur when objects are stored in structures such as vectors and hash tables, and are later incorrectly retrieved as objects of different type. The language exacerbates the problem, because there is no way to define data structures for particular types, and which would therefore check that insertion and retrieval of objects was type-correct. A cautious programmer therefore has to define vectors for characters, vectors for integers, and so on -- handling each type as a completely new definition. Providing multiple definitions of essentially the same class encourages unnecessary error and increases the work required for checking and testing.
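Such a hand-written, type-specific wrapper might look like this (a sketch in the pre-generics Java of the time; the class name is mine) -- and an essentially identical class must be written again for every element type:

```java
import java.util.Vector;

// One wrapper per element type, as the text describes.
class CharacterVector {
    private final Vector v = new Vector();

    // only Characters can get in...
    public void add(Character c) { v.addElement(c); }

    // ...so the cast on the way out is guaranteed to succeed
    public Character get(int i) { return (Character) v.elementAt(i); }

    public int size() { return v.size(); }
}
```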

Although Java is technically strongly typed (all type errors are detected), in practice, programmers over-use the Object type, for example by defining vectors of objects rather than of more specific element types, and therefore lose the advantages of strong typing. Type errors may be detected when they occur, but they are not avoided, and certainly they are not reliably detected at compile time. The next section gives an example where correct programming requires the use of Object where using a more specific type inevitably gets incorrect results!

Object orientation in Java

Ducks make a good example of Java inheritance. We define a duck to have two feet. Ducks calculate how many legs they have by counting their feet. (In a realistic application, methods will do something more sophisticated; but maybe just returning the number of feet is enough for a duck!)

class Duck
{   int feet = 2;
    public int legs() { return feet; }
}

A lame duck is a duck with only one foot. We can define lame ducks as a subclass of ducks:

class LameDuck extends Duck
{   int feet = 1;
    public int legs() { return feet; }
}

If we ask a lame duck, it has 1 foot, and 1 leg, as we would expect. Now, since lame ducks are ducks, we can assign a lame duck object to a duck variable:

Duck d = new LameDuck();

If we ask a duck that is a lame duck, we find it has 2 feet, but only 1 leg. The lame duck's own method legs() is called, but it accesses the lame duck's field, which is 1.
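Putting the two classes together, a short demonstration (the driver class is mine): field access is resolved from the declared type of the variable, while the method call is dispatched on the actual object.

```java
class Duck {
    int feet = 2;
    public int legs() { return feet; }
}

class LameDuck extends Duck {
    int feet = 1;                       // hides Duck.feet rather than replacing it
    public int legs() { return feet; }  // overrides Duck.legs()
}

public class DuckDemo {
    public static void main(String[] args) {
        Duck d = new LameDuck();
        System.out.println(d.feet);   // 2: the field of the declared class, Duck
        System.out.println(d.legs()); // 1: LameDuck's method, reading LameDuck's field
    }
}
```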

Since all ducks are the same, and all lame ducks are the same, we may as well make the feet and legs static.

Now we find lame ducks have 2 static legs and 2 static feet. In other words, static members are resolved differently under hiding: they are obtained from the declared class of the reference, not from the actual class of the object!

This seems curious. It means that a method cannot guarantee what fields it accesses, because it depends on how they are declared -- as class or object variables. This is true, even if the field is private -- which only means the field cannot be accessed outside the class, rather than that this class cannot accidentally access other same-named fields. On the other hand, if we made duck's feet private, then it would be a syntax error to refer to them outside of the class, and lame duck's legs() would get to see its own feet.

It's very easy to confuse the different behaviour of fields and methods. This is a point made in JPL:

"You've already seen that method overriding enables you to extend existing code by reusing it with objects of expanded, specialized functionality not foreseen by the inventor of the original code. But where fields are concerned, one is hard pressed to think of cases where hiding them is a useful feature." p69

"Hiding fields is allowed in Java because implementors of existing super-classes must be free to add new public or protected fields without breaking subclasses." p70

"Purists might well argue that classes should only have private data, but Java lets you decide on your style." p70

Careful Java programmers -- or purists -- will therefore define all fields to be private, and will provide accessor functions if the field's values are needed outside of the class body. Unfortunately, this safer programming has efficiency implications, which is probably the real reason Java is designed the way it is. (On the other hand, if all fields are private, then introducing a new field could not break a subclass, as the designers of Java evidently worry.)

This isn't the end of the story. If we use static methods and fields, we get quite different results. All ducks have 2 static legs and 2 static feet. If we directly ask a lame duck, it has 1 static foot, and 1 static leg, as you'd expect. However, if we now ask a duck that is a lame duck, we get a new result. All such lame ducks have 2 static legs and 2 static feet. The static legs() method in lame ducks does not override the static method in ducks.
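A sketch of the static case (the class names are altered so the example stands alone): both the field and the method are now resolved from the declared type, because static methods hide rather than override.

```java
class StaticDuck {
    static int feet = 2;
    static int legs() { return feet; }
}

class StaticLameDuck extends StaticDuck {
    static int feet = 1;               // hides StaticDuck.feet
    static int legs() { return feet; } // hides StaticDuck.legs -- no overriding
}

public class StaticDemo {
    public static void main(String[] args) {
        StaticDuck d = new StaticLameDuck();
        System.out.println(d.feet);   // 2: resolved from the declared type
        System.out.println(d.legs()); // 2: StaticDuck.legs, chosen at compile time
        System.out.println(StaticLameDuck.legs()); // 1: only when asked directly
    }
}
```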

We'll now make lame ducks be "politically correct" so they consider themselves the equal of any duck, lame or otherwise. Lame ducks are given the method to achieve this:

public boolean equals(Duck d) { return true; }

And we find that a duck does not think it is the equal of a lame duck, and that a lame duck thinks it is the equal of a duck, as we wanted. But if we ask a duck that is a lame duck, it thinks it is not the equal of a duck! The problem here is that the equals method for lame ducks does not override the equals method for ducks, because ducks inherit from Object, whose equals method takes an Object parameter -- we had defined lame duck's equals to take a Duck parameter. A correct implementation of equals for lame ducks is:

public boolean equals(Object d)
{   if( d instanceof Duck ) return true;
    else return super.equals(d);
}

Notice that we cannot define an equals method whose incorrect use would be a syntax error. Since Object's equals takes Object parameters, we cannot get the compiler to detect occasions where we try to compare a duck with a Vector (say). Instead, this programming error has to be treated as a run time error. Arguably, the correct equals implementation should throw IllegalArgumentException rather than call its superclass equals.
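A small demonstration of the overloading trap (the classes are renamed so the example stands alone): equals(Duck2) merely overloads Object.equals, so a call through an Object reference silently ignores it.

```java
class Duck2 { }

class LameDuck2 extends Duck2 {
    // overloads Object.equals(Object); it does NOT override it
    public boolean equals(Duck2 d) { return true; }
}

public class EqualsDemo {
    public static void main(String[] args) {
        Object lame = new LameDuck2();
        System.out.println(lame.equals(new Duck2()));            // false: Object.equals runs
        System.out.println(new LameDuck2().equals(new Duck2())); // true: overload chosen statically
    }
}
```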

Notational issues

We put the following observations in alphabetical order. We do not distinguish between, say, "problems you can live with" (such as multidimensional array subscripting) and "problems that are continual risks" (such as compromised static typing).

We do not discuss some rather complex and worrying aspects of Java (such as initialisation) because it would be too complex to do so (and we probably wouldn't get it right)!

Arrays

Arrays have several purely syntactic problems. An array can be initialised by an array initialiser:

int a[] = {1,2,3};

which assigns the array denoted by {1,2,3} to the variable a. Although this looks like an assignment statement (compare it with int b; b = 2, which is equivalent to int b = 2), Java does not permit array literals in any context other than initialisation. It is not possible to pass an array literal as a parameter or to use it in any other expression.
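A sketch of the restriction and its workarounds (the names are mine): the initialiser form is rejected as an argument, so one either declares a dummy variable, or uses an array creation expression.

```java
public class ArrayLiteral {
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) {
        // sum({1, 2, 3});         // rejected: {...} only appears in initialisers
        int[] tmp = {1, 2, 3};     // workaround 1: a dummy initialising context
        System.out.println(sum(tmp));                // 6
        System.out.println(sum(new int[]{1, 2, 3})); // workaround 2: a creation expression
    }
}
```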

Arrays are objects, but they are not in the class hierarchy (e.g., they cannot be subclassed). Two restrictions follow directly from this. First, array initialisers cannot be used to initialise Objects (a dummy variable has to be declared); second, array types (as with basic types) cannot be named.

Under certain circumstances it is not at all easy to create an object initialising it with an array. Consider a class with constructors having array parameters. A subclass of this has a constructor. How can it call its superclass constructor with an array parameter? Not easily, since the compiler automatically calls super() if a constructor does not start with an explicit constructor invocation. Unfortunately, an initialisation line int a[] = {1,2,3} is evaluated after the superclass constructor is automatically invoked. To solve this problem, the superclass needs to provide a method (init, say) that can be invoked to initialise the object after it has been constructed. This sort of complexity could have been avoided by making array literals acceptable as parameters.

Arrays can be created by new expressions (that are allowed anywhere):

int a[] = new int[5];

The general syntax for the new expression is type dimensions. In general it seems new x[n] creates n elements of type x. Now, to declare a two dimensional array, one writes new int[4][], for example, which creates four int[] arrays. Yet new (int[])[4], which should mean the same thing, is not permitted.

Multidimensional arrays are subscripted by each dimension in their own brackets, e.g., a[i][j]. It would have been conventional, easier to read, and would have saved a little typing, if commas could have been used, as in a[i,j].

Using square brackets for array subscription has its own problems, since no methods can be declared to do array subscription using the same syntax. Thus, a[i] = 5 has no equivalent method invocation. Instead, a programmer has to write something like a.setElementAt(5, i). Like C++, Java could have had programmer-definable methods for [].
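So a class standing in for an array must fall back on named methods (a sketch; the class name is mine):

```java
// [] cannot be overloaded in Java, so get/set methods take its place.
class IntSeq {
    private final int[] data;

    IntSeq(int n) { data = new int[n]; }

    int get(int i) { return data[i]; }        // plays the role of a[i]

    void set(int i, int v) { data[i] = v; }   // plays the role of a[i] = 5
}
```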

Casts

Casts in Java are prefix (as in C). If they were postfix, fewer brackets would be needed.

For example

((AClass) a.elementAt(n)).action()

could have been more conveniently written, without having to balance nested brackets across arbitrarily long distances of code:

a.elementAt(n)(AClass).action()

Perhaps this still seems to have excess brackets! But whether they are prefix or postfix, because Java allows variables and classes to have the same names, casts have to be syntactically distinguished from field and method names.

Catch expressions

Catch works a bit like a method invocation. When an exception is thrown, a catch "method" is invoked whose parameter matches the exception -- except that catch clauses are chosen by a sequential process, whereas overloaded methods are chosen by an order-independent algorithm.

Comments

It is not easy to comment out code. First, if the code already contains comments, then as comments do not nest, one has to find and delete any */ in the inside code. Secondly, it is not possible to comment out structured code by making it conditional on if(false), since Java then has a compile time error, reporting the code is unreachable. Of course, that is a reasonable error message, since most of the time one does not want to write code that is thought to be executed but in fact can never be due to an error. Nevertheless, there is no way in Java to disable a fragment of code, structured or unstructured, without doing some tedious, and hence error-prone, editing.

Of course, this problem (if you agree it is one) can be solved by sufficiently powerful program editors. But why shift complexity from the language to its editors? What justification is there for this, other than backwards compatibility with the equally silly way C++ works?

Compound statements

Statement syntax "taken over" from C allows compound statements wherever simple statements are permitted. Thus, the conditional statement in an if can either be a simple statement, or a compound statement grouped by curly brackets. Java introduces three new structures (throw, catch, finally) but their statements must be compound. It is not permitted to write try a = b/c; instead one has to write try { a = b/c; }

Concatenation

The operator + is overloaded, and means either addition on numbers, or concatenation of StringBuffer objects. Thus 1+2+"x" has the value "3x", but "x"+1+2 has the value "x12".

Concatenation uniquely invokes the method toString automatically: in all other contexts, method invocations have to be explicit (except for super constructors during initialisation). Thus "x"+o is equivalent to "x"+o.toString() if o is not a String or a basic type. Even passing a non-String actual parameter to a String formal parameter does not do this, so +'s operands are here doing something more powerful than parameters in any other context. Moreover, the programmer cannot define methods other than toString with this privilege. (Examples such as toInt() come to mind.)

What Java does is complex. In fact, concatenation is defined for StringBuffers, not for Strings. Suppose we write "x"+o. This is effectively converted to new StringBuffer("x").append(o).toString(), with one of StringBuffer's ten append methods being selected appropriately to match the type of o.

StringBuffer provides sufficient definitions of append to cater for anything that is post-concatenated, but what an expression like 1+2+"x" means is not defined; it probably means:

new StringBuffer(new Integer(1+2).toString()).append("x").toString()

-- because StringBuffer does not have constructors for basic types such as ints, we must suppose that the integer expression has to be converted to an Integer object, and that object asked to return its String equivalent. (There is a String constructor for StringBuffer.)

Neither String nor StringBuffer has constructors for basic Java types or for objects (with a few exceptions, such as for Strings themselves), so a programmer will find ""+3 (say) a convenient idiom for constructing a String initialised to new Integer(3).toString().
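The behaviours described above are easy to check directly (a small sketch; the expected output is noted in comments):

```java
// Sketch of the concatenation rules discussed above.
public class ConcatDemo {
    public static void main(String[] args) {
        System.out.println(1 + 2 + "x");  // prints 3x: the leftmost + is numeric addition
        System.out.println("x" + 1 + 2);  // prints x12: both + are concatenation
        String s = "" + 3;                // the idiom: a String initialised from an int
        System.out.println(s.equals(Integer.toString(3)));  // prints true
    }
}
```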

String concatenation is certainly very useful, but is it useful enough to require such peculiar rules and exceptions, particularly when it uses a common overloaded operator, +? A simple alternative would have been to choose a different operator for concatenation, such as $, or even space.

See also comments on Overloading, below.

Constructors

Class constructors are written specially, and always construct a new object. In consequence, constructors cannot be defined in terms of each other.

Suppose the constructor Obj(true) is used often; we cannot define a constructor Obj() that calls Obj(true), since to do so would create two objects. Instead, the two constructors have to share a common initialisation routine:

class Obj
{ Obj()
  { init(true);
  }

  Obj(boolean arg)
  { init(arg);
  }

  private void init(boolean arg)
  { ...
  }
}

In general, in a class with many constructors, the programmer is reduced to using an arbitrary naming convention to manage the constructor's auxiliary routines. Java does not separate constructing an object from initialising it.

If not done explicitly, a constructor is made to start with an implicit call to its superclass constructor. In particular, no code can be placed before a superclass constructor. This seems a rather severe restriction, because we are required to initialise the superclass as well as to create it. The following two constructors could mean the same thing, but the second is invalid. It appears that expressiveness has been limited for the sake of a simple rule, that superclass constructors must come first.

class Obj2 extends Obj
{ Obj2(boolean arg) // ok
  { super(arg? false: true);
  }

  Obj2(boolean arg) // invalid
  { if( arg )
      super(false);
    else
      super(true);
  }
}

In fact, Java allows arbitrary computation before invoking the superclass constructor (provided the superclass constructor takes at least one parameter, and, of course, provided that no object instance variables are referred to). The following is an example:

class Obj2 extends Obj
{ Obj2(boolean arg)
  { // nothing allowed here
    super(doAnything(arg));
  }

  static boolean doAnything(boolean arg)
  { // anything allowed here
    return ...;
  }
}

It is reasonable that the superclass object must be created first, but it is not obvious that it also needs initialising before proceeding. Java's rules on default values for variables are complex, and this is probably why it is preferable to require creation and initialisation to be combined in the single operation of construction.

Editorless

Java is designed as a conventional textual language. Yet hardly any modern programming environment is purely textual, and nothing in Java's design helps it benefit from modern structure-aware editing features.

Enumerated types

Java does not have enumerated types, which seems an oversight since C does.

In C it is possible to write enum {a,b,c,d} x; and be certain that the values a, b, c, and d are distinct. In Java, the closest one can get is final int a = 1, b = 2, c = 3, d = 4. One could easily mistype and have a=c or introduce other errors.

It is possible to define enumerated types, sort-of. The following defines an enumerated type, Enum:

class Enum
{ public static final Enum
    a = new Enum(),
    b = new Enum();

  private Enum() {}
}

An Enum variable can have the value Enum.a or Enum.b. There are no other Enum values; Enum.a.equals(Enum.a) is true, and Enum.a.equals(Enum.b) is false. They therefore work a bit better than C's enum {a,b} could!

Yet there is no way to generalise this. We can't have enumerated gender values male and female without writing a completely new class. Although one might argue that enumerated types are "bad programming style," the Java libraries use constants (final ints) a great deal -- risking the disadvantages mentioned above, as well as the problem of programmers accidentally passing the wrong sort of constant. The following is an example, from the Java libraries, that is syntactically valid but incorrect.

new Event(target, when,
          0, 0, Event.F1, Event.SHIFT_MASK, Event.GOT_FOCUS);

The parameters (after the first two) are all integers, and this example has put them in the wrong order. Proper enumerated types, so that event ids, key names and modifier values were different types, would have made the example a compile-time error, rather than an obscure run-time error.
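The gender example above can indeed be written as a typesafe class in the style of the Enum class, but only at the cost of a whole new class per enumeration (the names Gender, male and female are invented here):

```java
// Hypothetical sketch: a typesafe enumeration in the style of the Enum class above.
// The class and value names are invented for illustration.
final class Gender {
    private final String name;
    private Gender(String name) { this.name = name; }
    public String toString() { return name; }

    // the only two values of the type; the private constructor forbids others
    public static final Gender male = new Gender("male");
    public static final Gender female = new Gender("female");
}
```

A method taking a Gender parameter would then reject an Event constant at compile time -- exactly the check the Event example lacks.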

Exceptions

Java has two sorts of exception: those deemed so common that declaring them would burden the programmer, and the rest.

Consider Enumerations: the method nextElement() should throw a NoSuchElementException if it is called when there is no next element in the enumeration object. Unfortunately, NoSuchElementException is a subclass of RuntimeException, and therefore the compiler does not check it. In particular, it does not check that the method throws the exception -- even though the interface definition seems to require it. Interestingly, code given in the Java Programming Language book makes just this mistake (pp222-223): their method returns null rather than throwing an exception.
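The point is easy to reproduce (a sketch; the class name is invented): an implementation of Enumeration compiles whether or not it declares NoSuchElementException, and the compiler would equally accept a version that never throws it at all.

```java
import java.util.Enumeration;
import java.util.NoSuchElementException;

// Sketch: NoSuchElementException is unchecked, so the compiler neither
// requires a throws clause nor verifies that nextElement can throw it.
class EmptyEnumeration implements Enumeration {
    public boolean hasMoreElements() { return false; }
    public Object nextElement() {
        throw new NoSuchElementException();  // no throws clause needed
    }
}
```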

A second problem with exceptions, which hinders their effective use in programs, is that every method that does not handle a compile-time checked exception must declare that it throws the exception. Hence, if a method is strengthened to throw an exception, every piece of code that uses it must be rewritten; sometimes, of course, this is not possible (e.g., because the code is proprietary). A weaker requirement, which would meet the same safety requirement, is merely to require that the exception is caught somewhere -- this would not require the rewriting of all classes using the strengthened method.

Final variables and ROMable code

Java is partly designed for embedded systems, such as mobile phones. Much of an embedded system will be in read only memory. Java provides no features for using read only memory. A final variable, for example, may have its value calculated at run time. It cannot reside in ROM.

The keyword final itself has several meanings. Applied to a class, it means the class cannot be subclassed (and, incidentally, that objects of that class can be compiled more efficiently, since there is no need for run time type checks). Applied to a method, it means the method cannot be overridden. Applied to a field, it means the field has a constant value but can be hidden. Applied to a parameter, it means the parameter has a constant value. For both fields and parameters, final only prevents changing which object is referred to; it does not stop programs assigning to the fields of that object.

The following is permitted, clearly demonstrating that the final field o is far from constant.

class Thing
{ int var;

  static final Thing o = new Thing();

  static
  { o.var = 3;
    o.var = 4;
  }
}

A final class is different from a class with final methods and fields. Final fields are constants, but they can be hidden in a subclass. A final class does not have subclasses, so its fields cannot be hidden. Whereas a method can be declared as final (so it cannot be overridden) there is no notation to so define a field. Thus, Java requires String to be a well-defined class since the compiler relies on it. Therefore it is a final class. But that means no programmer can subclass String to define their own extensions of it. Surely it would have been preferable to allow programmers to subclass String, but use "final" to restrict what they can hide or override? Unfortunately, not -- at least as Java is defined -- since a String must have a private field, and that could in principle be hidden by a subclass.

Initialisation

Java has many sorts of variable. Local variables are defined within methods, and the Java compiler ensures that they are initialised before being used. The following would be a compile time error:

int i;
if( test ) i = 4;
System.out.println(i);

However, if "else i = 7;" is added, so that whatever the value of test, i is initialised, the initialise-before-use requirement is met and the program fragment would be correct. There is no such check that class variables (fields) are initialised before use, and therefore Java has rules for their default initialisation: a class int variable is initialised to 0, a class object variable to null, and so on.

Array variables are initialised the same way, whether they are method or class variables. If the array has element type t, then the default initialisation for the array elements is the same as the default initialisation for class variables of type t. An array of ints, therefore, has its elements initialised to zero. The reason for arrays being treated differently is that the analysis to check initialise-before-use would be too difficult and too restrictive (for example, under what circumstances could a partially initialised array be passed as a parameter to another method?).
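These defaults can be observed directly (a minimal sketch):

```java
// Sketch of the initialisation rules: class variables and array elements
// get defaults, but an uninitialised local is a compile time error.
public class InitDefaults {
    static int field;          // class variable: defaults to 0
    static String obj;         // class object variable: defaults to null
    public static void main(String[] args) {
        int[] a = new int[3];  // elements default to 0, like class variables
        System.out.println(field + " " + obj + " " + a[0]);  // prints 0 null 0
        // int i; System.out.println(i);  // would not compile: i may be uninitialised
    }
}
```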

Finally, there are variables that are formal parameters (to methods, constructors and catch expressions). Necessarily a method, constructor or catch is only invoked when all its actual parameters are provided: therefore formal parameters are always initialised -- though, of course, an object formal parameter may be initialised to null.

Arguably the rules for initialisation of non-array local variables, which the Java compiler checks, provide a useful safety feature. Alternatively, one might argue that the different sorts of initialisation add potential for confusion. A programmer might accidentally rely on a class variable or an array element being correctly initialised. And since Java has no 'undefined' value, except null for objects, any such incorrect assumption is unlikely to be easily discovered (except where the variable is an object). The Java Programming Language book says

"Variables are undefined prior to initialization. Should you try to use variables before assigning a value, the Java compiler will refuse to compile your program until you fix the problem."

This is not correct. Is the lapse excusable because this is an elementary comment on page 4, or does it reflect a deeper confusion in the language, that even its designers forgot to clarify properly?

Left expressions

Expressions can be placed to the left and right of assignments. Although C++ (though not C) allows a choice of target for assignments, the following is illegal Java: (test? i: j) = 4;

Instead it has to be written out in full using if, and the right hand side of the assignment has to be repeated (or previously assigned to a temporary variable). Either way increases the risk of making typos.
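A sketch of the rewriting the text describes (the class and variable names are invented):

```java
// Sketch: the conditional target must be spelt out with if,
// repeating the right hand side.
public class LeftExpr {
    static int i, j;
    static void assign(boolean test) {
        // (test ? i : j) = 4;        // illegal Java
        if (test) i = 4; else j = 4;  // the legal, more repetitive form
    }
    public static void main(String[] args) {
        assign(true);
        System.out.println(i + " " + j);  // prints 4 0
    }
}
```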

It is not possible to write a method as a left expression. This choice increases the viscosity of Java. For example we can write

a[i] = 4

but if we decide to change a from an array to a Vector, then each use of a[i] has to be examined before it can be changed. a[i] on the right of an assignment has to be converted to a.elementAt(i), whereas on the left of an assignment, say a[i] = 4, it has to be converted to a.setElementAt(new Integer(4), i).
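The two conversions look like this in full (a sketch using the Vector methods named above):

```java
import java.util.Vector;

// Sketch: the single syntax a[i] splits into two different method calls
// when an array becomes a Vector.
public class VectorViscosity {
    public static void main(String[] args) {
        int[] a = new int[2];
        a[1] = 4;                                  // arrays: same form on either side
        int x = a[1];

        Vector v = new Vector();
        v.addElement(Integer.valueOf(0));
        v.addElement(Integer.valueOf(0));
        v.setElementAt(Integer.valueOf(4), 1);     // the left-hand conversion
        int y = ((Integer) v.elementAt(1)).intValue();  // the right-hand conversion

        System.out.println(x == y);                // prints true
    }
}
```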

Long and double literals

Java can catch programmers out by little details. These are traps that may easily have been avoided by legibility considerations. Here is a sample Java program illustrating the potential confusion with long literals:

public class LongTest
{ public static void main(String args[])
  { long a = 22222222222222221,
         b = 2222222222222222l;
    System.out.println(a+" "+b);
    // prints 1303176077 2222222222222222
  }
}

There are several problems illustrated here. First, the Java compiler reads the first number as an int and does not complain about overflow; the value assigned to a is therefore just the low 32 bits of the number as written. Why isn't this loss of precision an error? Secondly, the assignment to b assigns a long, not an int. The difference is entirely in the letter l or the digit 1 at the end of the number. This difference is not very obvious, and it is obviously not very obvious -- even JPL says "l (lowercase L) can easily be confused with 1 (the digit one)" (p108).

In addition to the l/1 confusion, integers and longs work in a counter-intuitive way: the programmer has to write something extra not to lose precision. Normally, one would expect the language to be designed so that compilers would complain when precision is lost by default.

This is confusing for programmers; and compilers (i.e., compiler writers) find it confusing too. For example, the Metrowerks 2.0.1 compiler asserts that 2222222222 is a (compile time) numeric overflow, but does not recognise the overflow in 22222222222 (which is an order of magnitude bigger).

Floats and double precision floats form a 'pair' analogous to integers and longs. However, Java treats float and double literals the other way: by default a literal such as 2.3 is double precision, and cannot be assigned to a float without a compile-time error or an explicit cast.

We can write 2.3f and (float) 2.3 and they are equivalent, because the literal is a double precision number, and the (float) cast explicitly removes precision. In contrast, with whole number literals, 22222222222 is an integer (32 bits) regardless of whether it is cast to long or not!
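A sketch of the asymmetry between the two literal families:

```java
// Sketch: long literals need a suffix not to lose precision, while
// float literals need a suffix (or a cast) to be accepted at all.
public class LiteralDemo {
    public static void main(String[] args) {
        long b = 2222222222222222L;  // upper-case L avoids the l/1 confusion
        // float f = 2.3;            // would not compile: 2.3 is a double
        float f = 2.3f;
        float g = (float) 2.3;       // equivalent: the cast discards precision
        System.out.println(f == g);  // prints true
    }
}
```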

Overloading

According to the Java White Paper:

"There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code."

But this isn't quite true, because Java makes a syntactical distinction between methods and operators. Methods can be overloaded, and since programmers can only define methods, sure, they can overload them at will. But Java provides a number of syntactic operators (+, [], ., and so on) which cannot be overloaded at all. As we said elsewhere in this article, that lack (with arrays) is tedious, and when Java itself overloads + for both numbers and strings, programmers feel handicapped.

Packages

Packages are a hierarchical naming convention that is independent of the Java text within files. Any file can specify that it is part of a package. All classes defined within a package are known throughout the package, and classes declared public can also be accessed from outside the package.

The package is a coarse concept. Apart from nested classes (which are accessible everywhere in a package by using a qualified name), there is no way of hiding auxiliary classes within files or parts of packages. If there are, say, two classes within a package that share a common auxiliary class, that class has to be accessible throughout the package. An alternative language choice might have been to permit the access qualifier private to be applied to classes, and for this to mean that the class name was hidden outside the file in which it is declared.

Packages are typically associated with a filestore hierarchy, in a way that is not specified by the Java standard. The result is that different compilers treat Java files differently, in particular files containing more than one class. Therefore Java is not source compatible across platforms or across even compilers on the same platform.

Parameterised classes

Although arrays are objects and there can be arrays of any type, it is not possible to define parameterised classes of any other sort. For example, Vector is a flexible array (defined entirely in Java) but elements of Vector must be Object, and they cannot be any more specific. To use a Vector, then, means that almost all advantages of static typing are lost.

An array of ints cannot have a double assigned to it without a compile time error. A Vector, in contrast, can have any Object assigned to its elements -- which means anything can be assigned to it, since all classes are subclasses of Object. No assignment to a Vector is a type error. A careful programmer will therefore define classes intVector, doubleVector and so on. But each of these classes has to be written from scratch, risking unnecessary errors, even though they differ only in the words int and double! Defining the safer classes is more verbose; and if generic classes are used (such as Vector itself) then each use of an object gets no compile time check and requires verbose casts, risking run time errors.
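A sketch of the run time failure that static typing would have caught:

```java
import java.util.Vector;

// Sketch: any Object can go into a Vector, so a type mistake only
// surfaces at run time as a ClassCastException.
public class VectorTyping {
    public static void main(String[] args) {
        // double[] d = new double[1]; d[0] = "oops";  // compile time error for an array

        Vector v = new Vector();
        v.addElement("oops");                      // no complaint from the compiler
        try {
            Integer i = (Integer) v.elementAt(0);  // the verbose, unchecked cast
        } catch (ClassCastException e) {
            System.out.println("run time error");  // only now is the mistake found
        }
    }
}
```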

Protected and default access

Access modifiers include public, private, protected and default. The rules for protected and default are very obscure. Yet the available access modifiers do not include one for restricting access to the current class and its subclasses only.

Strings

Strings and StringBuffers are fundamental to many programs, yet the Java classes that implement them are incomplete and final. (That is, they cannot be subclassed, so their incompleteness cannot be rectified.) For example, StringBuffer.equals may not do what the programmer wants -- two StringBuffers may both represent the string "abc", but nevertheless be unequal because they have different capacities -- and it is not possible to override this meaning of equals. It is not possible to add a method deleteChar without creating a whole new, independent, class. So much for inheritance, when some of the most useful classes cannot be inherited from!

Switch statements

Switch statements are like C's. They only switch on integers, and all case expressions must be constant expressions. Constant expressions are more restrictive than final variables: they must have a value that can be evaluated at compile time. Yet Java could have permitted more general switches: in the context of switch(e), a case label case f could be selected when f.equals(e). If all the expressions are integers, then obviously Java could optimise the switch in interesting ways; it could, too, if all the expressions were Strings, and indeed many other types. If there were a mixture of constants and variables, the compiler could still perform optimisations that would be beyond most programmers, and the result would be much easier to maintain.

The generalised switch would be more in line with the more powerful and clearer guarded command syntax. This is an example of how Java could have been a superset of C/C++, and therefore retained its compatibility for programmers but introduced new features that would not break existing habits.

One peculiar consequence of the rules for switch is illustrated by

final int i = 1, j = o.hashCode();

where, although both variables are final, i is a constant, and j is not; i can be used in a case statement, j cannot. Apart from switch statements, i and j can be used interchangeably. In other words, the design of switch creates a unique and confusing distinction. Moreover, it's a distinction that isn't necessary.
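The distinction can be seen in a small sketch (the class and method names are invented):

```java
// Sketch: i and j are both final, but only the compile-time constant
// may appear as a case label.
public class SwitchConst {
    static String classify(int x) {
        final int i = 1;               // a constant expression
        final int j = "x".hashCode();  // final, but not a constant expression
        switch (x) {
            case i:  return "one";     // legal: i is a compile-time constant
            // case j: return "hash";  // illegal: j is not a constant expression
            default: return "other";
        }
    }
    public static void main(String[] args) {
        System.out.println(classify(1) + " " + classify(2));  // prints one other
    }
}
```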

Variable names

For the sake of reliability, a programming language might forbid variable names to consist only of the letters I, S, O and digits, because those letters are too easily confused with digits. This is justifiable -- many programs have failed because of confusion between 0 and O -- but it introduces a fairly arbitrary rule. One assumes that programmers should not make that sort of mistake! In fact, Java variables use Unicode, which is a vast character set with many symbols: it would be impractical to have such rules -- and would instil a false sense of security. Curiously, the Java language specification recommends using l (easily confused with 1) as the conventional single letter variable of long type!

Java variable names cannot be any of the Java "reserved" words (like goto, true and class), and must start with a letter and be followed by letters or digits. A Java letter can be from almost any language, so A (the Latin capital) and Α (the Greek capital alpha) are different, even though they look identical! For historical reasons, Java counts underline and dollar sign as letters.

Many identifiers are naturally composed of several words, such as in the keyword instanceof. The language specification recommends certain conventions: for constant names, words should be upper-case and separated by underline (as in MIN_VALUE). Other names should start with an upper-case letter if a class name, and subsequent words should have initial capitals -- ClassLoader (a class) and toString (a method) being examples. Unfortunately, Java has exceptions (such as marklimit, defined in the class BufferedInputStream).

Conclusions

I believe that good programming requires using a language with principles. I believe that the ease with which a language can be understood is a good indicator of how well it is designed; ideally, one should be able to learn incrementally, building constructively on past learning. Simple things should be easy, complex things should not conflict with simple things. With Java, one is always having to revise one's "knowledge" of it as more is learnt. I find it hard to imagine a book on Java being written without the word "but" or "except." The problem in a complex language like Java is that so much is unsaid. Sometimes this results in clearer and more compact programs -- they don't need to mention garbage collection. And sometimes it leads to incredible but hidden complexity -- such as when the order of initialisation is not quite right for an application. As we mentioned above, sometimes Java compounds the problems: for some reason, the language makes special cases out of the ways in which arrays can be used in initialisation. So arrays and initialisation can get very complex.

If we regret some aspects of Java's design, then we must ensure that future designers take better account of HCI. HCI has been taught long enough to have some impact on today's designers; it is time that HCI itself started to be effective and usable.

The list of thoughts about Java, above, is not complete: it merely gives some indication of the range of design choices in a large language like Java. The Java designers had many criteria to balance, and they made their own choices. Whether their choices were optimal is hard to say: Java is successful, and "improving" it in an objective sense would be to forget the vast investment programmers have had in learning Java as it is. The conclusion, then, is not that Java should be changed, but that when designing a system, certainly one intended for a world wide market, one should take great care to explore the HCI issues. Every minute each Java programmer wastes over an unnecessarily obscure feature is equivalent to the waste of a year of human life; every hour, then, is a full life wasted.

Java 1 was quickly replaced by Java 1.1. All the revisions it includes represent lost time to a huge number of programmers who must now learn and relearn the extensions and variations. The time lost is surely much more than an hour per programmer, and surely a responsibility that HCI practitioners must bear. For HCI academics and trainers, the question is, "How come all our HCI courses and books have had such little impact when it mattered?"

Many Java programmers I have spoken to believe Java is a great success. Yet when I have seen their programs, they are usually written in a Java-like subset of C. They surely gain by not having the risks of pointers and unchecked array subscripting, but why not use Pascal or another simpler language with much the same advantages? Thus, we cannot conclude that HCI practitioners alone are to blame: much of the poor quality of programming (including the rough-and-ready implementations of Java and its packages) is due to unnecessary complexity and a lack of technical skill, not just a lack of awareness of HCI -- which is, after all, what programming is for.

Further reading

K. Arnold & J. Gosling, The Java Programming Language, Addison-Wesley, 1996.

K. Arnold & J. Gosling, The Java Programming Language, Addison-Wesley, 2nd. ed., 1998.

J. Gosling, H. McGilton, The Java Language Environment A White Paper, Sun, 1996. See http://java.sun.com/docs/white/langenv/.

T. R. G. Green, 1995.

D. A. Schmidt, The Structure of Typed Programming Languages, MIT, 1994.

B. Stroustrup, The Design and Evolution of C++, Addison-Wesley, 1994.

R. D. Tennent, Principles of Programming Languages, Prentice Hall, 1981.

H. W. Thimbleby, "A Literate Program for File Comparison," in Literate Programming, Communications of the ACM, 32(6), pp740-755, 1989.

Acknowledgements

The author is very grateful to Ian Witten for helpful comments.

An example of nice Java

There are some very nice features of Java that do make it an attractive language. We very briefly mention just one: being able to revise a program to reliably use memo objects.

Suppose a program uses objects of class m, and constructs them with m(int). We now realise that all objects initialised the same should in fact be the same object. We therefore wish to replace, for example, new m(3) with some other way of constructing objects so that if an m(3) already exists, this will return the same object, and we must do so reliably. Java's definition of new does not directly permit this. This is a good test of the language's viscosity for this sort of change.

First, we make the m constructor private. This ensures no new m objects can be created accidentally: the Java compiler will now prohibit all obsolete uses of new m(...). We then add a static method newm that returns a new m object only if necessary:

class m
{ private static Hashtable h = new Hashtable();

  private m(int i) { ...as before... }

  static m newm(int i)
  { m t = (m) h.get(new Integer(i));
    if( t != null ) return t;
    h.put(new Integer(i), t = new m(i));
    return t;
  }
}

The rest of the program is then globally edited, changing new m(...) into m.newm(...). Any new m calls that are missed become compile-time illegal (since we made the m constructor private).
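Since the class body above is elided, a concrete runnable version looks like this (the class name Memo and the field value are invented so the class has something to construct):

```java
import java.util.Hashtable;

// A concrete version of the memoising idiom above; the value field is
// invented so that the constructor has something to initialise.
class Memo {
    private static Hashtable h = new Hashtable();
    final int value;
    private Memo(int i) { value = i; }

    static Memo newMemo(int i) {
        Memo t = (Memo) h.get(Integer.valueOf(i));
        if (t != null) return t;              // an equal Memo already exists
        h.put(Integer.valueOf(i), t = new Memo(i));
        return t;
    }
}
```

Now Memo.newMemo(3) == Memo.newMemo(3) holds: equal initialisation reliably yields the identical object.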
