Miscellaneous Language Notes

Updated: 8/7/2002

Addendum to Language Options

Error Handling Notes

One approach I kicked around involves a combination of an optional status attribute and various levels of "catchers".


// no explicit error handling
handle = openDB(sql, #driver std) 
// explicit error handling
handle = openDB(sql, #driver std #errto stat) 
if stat.errcode < 0 
  message "Big booboo: " & stat.errText
end if
// (stat is a dictionary array, "#" marks 
//  named parameters.)
If there is an "errto" named parameter, then no automatic error handling is triggered; it is assumed that the programmer handles the error explicitly.

If there is *no* "errto" parameter, then the "chain of handlers" is called. First the routine-level handler is called (if it exists), then the module-level handler (if it exists), then the application-level handler (if it exists). If no handlers are supplied, then a regular application "crash" happens. (A crash can also happen if all the handlers return a "false" or empty value.)
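The chain-of-handlers idea can be sketched in Python. This is only an illustration of the dispatch order, not the proposed language itself; `ErrorStatus` and `dispatch_error` are made-up names.

```python
# Sketch of the "chain of handlers" dispatch. Each handler gets the
# status object and returns a truthy value if it handled the error.

class ErrorStatus:
    def __init__(self, routine_name, err_code, err_text):
        self.routineName = routine_name
        self.errCode = err_code
        self.errText = err_text

def dispatch_error(stat, routine_handler=None, module_handler=None,
                   app_handler=None):
    """Try each handler in order; a truthy return means 'handled'."""
    for handler in (routine_handler, module_handler, app_handler):
        if handler is not None and handler(stat):
            return True   # handled; execution continues
    # No handler claimed the error (or all returned false): "crash".
    raise RuntimeError("Unhandled error %s: %s" % (stat.errCode, stat.errText))
```

Note that a handler returning false passes the error up the ladder, matching the "crash if all handlers return false" rule above.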

A routine-level handler might be overkill and complicate the language syntax. Besides, programmers can use "errto" if they want that fine a level of control.

// module-level handler
sub errHandler(stat)
  println "Routine of error: " & stat.routineName
  println "Error Number: " & stat.errCode
  println "Error Text: " & stat.errText
  return true     // true = handled
end sub
// application-level handler
sub progErr(stat)
  return true   // true = handled, false = halt
end sub
The routine names "errHandler" and "progErr" would be reserved. (The names used here are tentative.)

Another reason a routine-level handler may not be necessary is that one can do things like:

sub errHandler(stat)
  var routineName = stat.routineName
  if routineName = "report4"
    // handling specific to report4 goes here
  end if
  if routineName = "checkCriteria"
    // handling specific to checkCriteria goes here
  end if
end sub
This might at first seem kind of silly, but it allows one to consolidate error handling for multiple routines (usually within a given module):
sub errHandler(stat)
  select on stat.routineName
  case "report4", "checkCriteria"
    // shared handling for the report routines
  case "foo"
    // handling for foo
  case "bar", "checker", "grog"
    // shared handling for these three
  end select
  return x   // x would be set true above if handled
end sub
For example, if you split a large routine into multiple smaller ones, then you only have to change the case statement list so that the new (split) routines get the same handling. This also allows one to group by different error aspects, such as the error location (routine), the kind of error (disk I/O, database, GUI, etc.), or another application-specific aspect. IOW, it does not force an aspect grouping onto the developer. Other handling approaches seem to force or encourage only one type of aspect.
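The same consolidation can be sketched in Python with a table (dictionary) mapping routine names to handler functions, which makes the grouping explicit. The routine names and handler bodies here are invented for illustration:

```python
# Grouping multiple routines under shared handlers, analogous to the
# select/case example above. Adding a newly split routine is one entry.

def handle_report_errors(stat):
    return True    # e.g. log the problem and continue

def handle_io_errors(stat):
    return False   # let the error escalate up the chain

HANDLERS = {
    "report4": handle_report_errors,
    "checkCriteria": handle_report_errors,  # split routine, same handling
    "foo": handle_io_errors,
}

def errHandler(stat):
    handler = HANDLERS.get(stat["routineName"])
    return handler(stat) if handler else False
```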

This approach IMO allows very flexible aspect grouping of handling, and also allows tree-level or forest-level handling as best fits the application needs. You can also mix the level of attention. For example, disk-I/O errors may be carefully handled locally, but database errors could be passed on up the ladder to a generic handler.

It is also not "intrusive" in that error handling code does not necessarily clutter up "normal" business logic with deeply nested code blocks.


Closures are somewhat controversial. Their usefulness seems a bit exaggerated to me, often motivated by poor procedural programming skills. Oftentimes the "sandwich pattern" can be factored such that it is not repeated over and over again, contradicting the mass savings promised by some closure fans.

For example, rather than opening and closing a file each time file writing is needed, farm it off to a function:

  text = getFooText(....)
  writeToFile("foo.bar", text)

You don't need to keep repeating the open-close pairs all over; only once in the writeToFile routine. Similarly, you can have HTML "decorators":

  TagMe(text, "b")  // surrounds with <b>....</b> pairs.
  TagMe(text, "font", 'color="red"')
(With some internal parsing, the second could be simplified even more.)
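Here is a minimal Python sketch of the same factoring, with the open/close pair living in exactly one place. The bodies of `writeToFile` and `TagMe` are my assumptions about their intended behavior:

```python
# The "sandwich" (open ... close) is written once, inside the helper,
# rather than repeated as a closure at every call site.

def writeToFile(filename, text):
    f = open(filename, "w")
    try:
        f.write(text)
    finally:
        f.close()   # the "close" half of the sandwich, in one place

def TagMe(text, tag, attrs=""):
    # Surrounds text with <tag ...>...</tag> pairs.
    opening = tag if not attrs else tag + " " + attrs
    return "<" + opening + ">" + text + "</" + tag + ">"
```

Usage mirrors the examples above: `TagMe("hi", "b")` yields `<b>hi</b>`, and `TagMe("hi", "font", 'color="red"')` yields `<font color="red">hi</font>`.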

As it stands now, closures are near the border of something that provides enough benefits to justify added language complexity. Their usage may depend on the target domain or one's particular style and programming philosophy. I might have more to say after I ponder this issue more.

Definition and Value of "Scripting"

Some definitions of "scripting" target language characteristics, and other definitions target typical usage of the language in question.

The first group often uses weak or dynamic typing as the most distinguishing feature. However, some languages in this category can get rather complex, have a large learning curve, and/or target building large applications.

Some don't wish to label such languages "scripting" because they perceive scripting languages as meant for writing small applications or "glue code" to tie applications together. These people also tend not to consider scripting languages "real" languages, implying they are only meant for trivial tasks.

The definition one prefers often depends on one's philosophy of programming. For example, my programming and software design philosophy tends to de-emphasize programming code, using relational technology and table API's instead to manage certain key areas of complexity. A lot of complex stuff can be "shifted" away from code into relational tables, tools, and techniques if you know how.

Thus, whether an application I design is small or large, the complexity of the language does not matter as much because I don't use the language for many features that others would otherwise. (There are still specific features which would be a great help for my style.)

Yet Another Car Analogy

Somebody who tends to fly on jets for vacations is going to have fewer demands or expectations from a car than somebody who likes to drive to reach vacation destinations.

Similarly, those who use other tools or API's to handle things that others do directly via the language itself are going to have fewer demands or expectations of the language.

The Merit of Scripting

Just like the definition (above), the merits of scripting will greatly depend on one's philosophies of software design.

One area of strong contention is the value of strong and/or compile-time checking versus weak or dynamic typing. Debates rage for months in Internet discussion groups on this topic. It is my opinion that it is very subjective. Things that tend to cause errors or problems for one person may not for another. I personally prefer weak typing, but I will not (any more) insist that what works best for me also works for others.

It is alleged that strong or compile-time checking (SCTC) protects programmers from many forms of errors by flagging type problems before the program actually runs. The response is often that SCTC creates more "code bloat" as a consequence (due to longer declarations and conversions), and that making code harder or longer to read creates errors in itself.

Further, the lack of a compile step makes testing easier and quicker, the scripting side will sometimes say.

The scriptish crowd may also argue that it is easier to adapt scripting languages to new interfaces or environments. Strong typing creates a kind of "insular" world, where you have to know or assume too much in advance, it is sometimes said.

I will point out that dynamic typing may actually provide more protection in certain situations. For example, suppose we have a field in a database called "customerID" (customer ID number). Suppose initially it was defined as an integer in the database. Under SCTC the code would most likely have an explicit declaration of "integer" in it to represent that field.

Now suppose a few years later our company merges with another company. The other company may have used text in their customer ID's for some reason. (I don't recommend allowing letters in such ID's, but it does happen.) They may have ID's like "SD-3212-A" for example.

To accommodate this, we can change the database column type from Integer to Text (or Character). The existing numeric ID's simply get converted to text strings.

Such a change is more likely to "break" an SCTC language-built application. It expects an integer, but will now get a string. The dynamic language probably won't care unless we did some explicit conversions, which generally are not needed for ID values. (One may want to trim leading zeros off before comparing. Perhaps a specific comparing function/operator can be built for comparing ID's. SCTC fans may want to consider an "ID" type that defines or overloads comparison operations.)
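To illustrate the scenario in a dynamic language, here is a hypothetical Python sketch: the lookup code is untouched when the ID column changes from integers to text. (`find_customer` is a made-up helper, not from any real system.)

```python
# The comparison code declares no type, so it survives the ID change.

def find_customer(records, customer_id):
    for rec in records:
        if rec["customerID"] == customer_id:
            return rec
    return None

# Before the merger: integer IDs
old_records = [{"customerID": 3212, "name": "Acme"}]

# After the merger: text IDs -- find_customer is unchanged
new_records = [{"customerID": "SD-3212-A", "name": "Acme West"}]
```

An SCTC equivalent with `int customerID` declarations would need edits (and recompilation) wherever that type appears.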

I have encountered similar situations multiple times, so it is not something to take lightly.

I do think that it is possible to capture some of the best of both worlds by having "suspicion checkers" for interpreted languages. (These are sometimes called "lint" tools.) Such a tool would look for potential problem spots before running the code.

One should be able to tell it which specific oddities or spots to ignore so that one doesn't have to wade through the same notices over and over. It certainly won't capture everything, but it can point out code which may need closer examination. Example:

  x = "foo"
  y = x + 3         // suspicious line
Our lint tool probably should point out the last line because we are taking a variable we defined with quotes and then performing numerical operations on it. (I am assuming that '+' is not used for string concatenation in our sample language.) Perhaps we really want it that way. If so, then we tell it to stop complaining about that particular line so we don't see the same message the next time we run our lint utility.
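As a proof of concept, a toy suspicion checker along these lines can be written in Python using its `ast` module: it flags variables assigned a string literal that later appear in arithmetic. A real lint tool would need scope and data-flow analysis; this sketch is deliberately simplistic.

```python
# Toy "suspicion checker": reports line numbers where a variable that
# was assigned a string literal is used in an arithmetic expression.
import ast

def suspicious_lines(source):
    tree = ast.parse(source)
    string_vars = set()   # names assigned a string literal
    hits = []             # line numbers of suspicious arithmetic
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        string_vars.add(target.id)
        elif isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Sub, ast.Mult, ast.Div)):
            for side in (node.left, node.right):
                if isinstance(side, ast.Name) and side.id in string_vars:
                    hits.append(node.lineno)
    return hits
```

Run against the two-line example above, it would flag the second line; an ignore list keyed by line or by variable name could then suppress notices the programmer has already reviewed.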