Sunday, February 27, 2005

Croquet 3D group collaboration environment

Croquet is a really cool 3D graphical group collaboration environment from Alan Kay's Viewpoints Research Institute. It is based on Viewpoints' Squeak rendition of the Smalltalk programming language and environment. As the Croquet web site notes:

Croquet is a combination of computer software and network architecture that supports deep collaboration and resource sharing among large numbers of users within the context of a large-scale distributed information system. Along with its ability to deliver compelling 3D visualization and simulations, the Croquet system's components are designed with a focus on enabling massively multi-user peer-to-peer collaboration and communication.

Croquet's treatment of distributed computation assumes a truly large scale distributed computing platform, consisting of heterogeneous computing devices distributed throughout a planet-scale communications network. Applications are expected to span machines and involve many users. In contrast with the more traditional architectures we grew up with, Croquet incorporates replication of computation (both objects and activity), and the idea of active shared subspaces in its basic interpreter model. More traditional distributed systems replicate data, but try very hard not to replicate computation. But, it is often easier and more efficient to send the computation to the data, rather than the other way round. Consequently, Croquet is defined so that replication of computations is just as easy as replication of data.

It is impossible to convey what the Croquet system does without first describing its various components and the ways they interrelate. In its simplest form, Croquet is a complete development and delivery platform that enables people to carry out highly collaborative work. It is an infinitely scalable architecture that can serve as a basis for delivering a scalable, persistent, and extensible interface to network-delivered resources, tools for knowledge management, and deep social presence. The resulting 3D wide-area environment makes it possible for large numbers of people to enjoy shared telepresence, shared authorship of complex spaces and their contents, and shared access to network-deliverable information resources.

Even the static 2D color screenshots are awesome, so you can imagine what the system must look like in motion. Croquet may still be experimental and not well integrated with the web and existing information environments, but it does give us a good feel for one of the directions in which group collaboration is headed.

Squeak programming system

Squeak is the name of the latest rendition of the Smalltalk programming language and interactive environment. As their web site notes:
Squeak is an open, highly-portable Smalltalk-80 implementation whose virtual machine is written entirely in Smalltalk, making it easy to debug, analyze, and change. To achieve practical performance, a translator produces an equivalent C program whose performance is comparable to commercial Smalltalks.

Other noteworthy aspects of Squeak include:
  • real-time sound and music synthesis written entirely in Smalltalk
  • extensions of BitBlt to handle color of any depth and anti-aliased image rotation and scaling
  • network access support that allows simple construction of servers and other useful facilities
  • it runs bit-identical on many platforms (Windows, Mac, Unix, and others)
  • a compact object format that typically requires only a single word of overhead per object
  • a simple yet efficient incremental garbage collector for 32-bit direct pointers
  • efficient bulk-mutation of objects
Squeak is available for free via the Internet, at this and other sites. Each release includes platform-independent support for color, sound, and network access, with complete source code. Originally developed on the Macintosh, members of its user community have since ported it to numerous other platforms including Windows 95 and NT, Windows CE (it runs on the Cassiopeia and the HP320LX), all common flavors of UNIX, Acorn RiscOS, and a bare chip (the Mitsubishi M32R/D).

It looks quite interesting, but of course it is the Smalltalk programming language.

There is a related effort to exploit Squeak for the education of children, called SqueakLand, packaging Squeak as "a media authoring tool".

There is also a related effort to support a 3-D group collaboration environment called Croquet.

Saturday, February 26, 2005

GMWIMW - Give Me What I Might Want

With all the talk about search engines, personalization, tracking, histories, etc., there is a little too much focus on trying to give the user results that their past history suggests that they would want. Maybe it's just me, but I have a different interest than merely wanting to see stuff similar or related to what I've seen in the past or what people similar to me are interested in. I'm always searching for new stuff, so what I would most like the computer to do is to "Give Me What I Might Want" or GMWIMW.

This is actually the opposite of using my past history to predict what I might be interested in. Rather than taking my history and moving a delta toward similar topics that correlate well with my past interests (or even new results of people similar to me), I want to make a quantum leap in some unexpected direction and get results that will likely have the lowest possible correlation with my past interests (or the results selected by people similar to me).

This is what I want the computer to do. Whether this is feasible is another matter.

Actually, I do know for sure one technique that at least offers the possibility of showing me results that I might want: randomly select an item of information that I've never seen before. Now of course that will frequently (usually) give me all sorts of uninteresting stuff that I have absolutely no interest in. That's okay. Just give me a little button so that I can signal topics that should be semi-permanently crossed off my potential interest list. I say semi-permanently, because even then, the computer might periodically query me as to whether some of those topics should really stay on my "do not show" list. It could do this by displaying closely related results (to the results I've expressed an extreme disinterest in) on the off chance that there was simply some superficial detail that discouraged me. In any case, after a short while, the computer would have quite an impressive library of topics and sub-topics that can be weeded out of even a random GMWIMW process.

I'm not suggesting that GMWIMW should be a random process, but at least there is some hope that GMWIMW could conceivably be implemented.
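As a purely hypothetical sketch (the function, the topic names, and the retry probability are mine, not part of any real search engine), the random-selection-with-veto idea above might look like this: pick a random item the user has never seen, mostly skip topics on the "do not show" list, but re-probe a vetoed topic with a small probability in case the disinterest was only superficial.

```python
import random

# GMWIMW sketch: random discovery with a semi-permanent "do not show" list.
RETRY_PROBABILITY = 0.05  # chance of re-probing a crossed-off topic

def gmwimw_pick(items, seen, do_not_show, rng=random):
    """Return a random never-seen item.

    items: list of (topic, title) pairs
    seen: set of titles already shown to the user
    do_not_show: set of topics the user has crossed off
    """
    candidates = [
        (topic, title)
        for topic, title in items
        if title not in seen
        and (topic not in do_not_show or rng.random() < RETRY_PROBABILITY)
    ]
    if not candidates:
        return None
    return rng.choice(candidates)
```

Over time, the `do_not_show` set plays the role of the "impressive library of topics" weeded out of the random process, while the small retry probability implements the periodic re-querying of crossed-off topics.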

To me, this is a "growth-oriented" search strategy. One that seeks new paths. One that seeks new horizons. One that seeks enlightenment. One that seeks inspiration. One that seeks innovation. One that almost makes the computer seem to have something like intuition.

On the other hand, I don't presume for one moment that my interests in GMWIMW coincide with those of the average search user.

Still, almost everyone has moments when all the traditional, methodical, and even heuristic strategies and techniques for making incremental forward progress are not getting them anywhere. Those are precisely the times when GMWIMW is the optimal search strategy.

The End-to-End Argument Principle

Deciding where to place functions in a distributed system is a very difficult proposition. One principle for guiding the placement of functions among the modules of a distributed computer system is known as the "end-to-end argument", as put forth by J. H. Saltzer, D. P. Reed, and D. D. Clark in a classic paper entitled "End-to-End Arguments in System Design", published in 1984 in the ACM Transactions on Computer Systems.

The principle suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples include bit error recovery, encryption, duplicate message suppression, recovery from system crashes, and delivery acknowledgement. Low level mechanisms to support these functions are justified only as performance enhancements.

I don't want to oversimplify the principle, but basically when you have a chain of modules or layers of software, possibly spread over multiple computer systems in a network, it can be counterproductive (and cause a significant loss of performance) for each module or layer to replicate a complete set of error detection, redundancy, and other housekeeping functions, when the better solution is to provide those functions once over the entire length of the chain (hence, "end-to-end"). In some cases, it may be easier and more efficient to compensate for errors, data losses, and other anomalies at the ends of the processing chain.
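As a toy illustration of the idea (my own, not from the paper), consider a transfer where an intermediate hop may silently corrupt data: rather than every hop carrying its own full integrity machinery, the sender attaches a checksum over the whole payload and only the final receiver verifies it, retrying end-to-end on failure. Any hop-level checks are then purely performance enhancements.

```python
import hashlib

def send(payload: bytes):
    # sender attaches an end-to-end checksum over the whole payload
    return payload, hashlib.sha256(payload).hexdigest()

def flaky_hop(payload: bytes, corrupt: bool = False) -> bytes:
    # an intermediate hop that may silently corrupt the data in transit
    return payload[:-1] + b"?" if corrupt else payload

def receive(payload: bytes, checksum: str) -> bool:
    # the end-to-end check: verify once, at the destination
    return hashlib.sha256(payload).hexdigest() == checksum

data, digest = send(b"hello, world")
assert receive(flaky_hop(data), digest)                     # clean path passes
assert not receive(flaky_hop(data, corrupt=True), digest)   # corruption caught at the end
```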

I do think the end-to-end argument is well worth discussing in the design of any distributed system, but I'm not completely convinced that the argument is always valid. That said, in practice the principle is more likely to flush out "bad ideas" than to scuttle the occasional great idea. My model is what I would call the "Usually End-to-End Argument".

Friday, February 25, 2005

Service-Based Computing

There is a new web service category called Service-Based Computing.

Allenport is one company working on a product or service in this space, calling it "Simple Computing ... Everywhere."

Independent product analyst Chris Shipley talks and writes about it, including an op-ed in the September 27, 2004 issue of Network World entitled "The dawn of service-based computing." She defines the term as follows:
Service-based computing delivers applications and data from a managed computing platform to a relatively simple end device. In doing so, it puts the onus of managing the computing environment on the service provider and liberates the end user to engage with the information. Service-based computing is the future model for nearly all computing and communications.

My Distributed Virtual Personal Computer (DVPC) concept would seem to fit in this new category, although DVPC is actually a hybrid, allowing you to have the best of both worlds, with a full-featured, high-performance local PC, but bullet-proof protection of your data and application settings on mirrored and encrypted remote servers.

Thursday, February 24, 2005

Quantum computing with real numbers

I would appreciate hearing from anybody who has any information on how quantum computers might handle real numbers. Is classic floating point (k-bit binary exponent and n-bit binary mantissa) really appropriate and how good might its performance be?

As an example, if a quantum computer was instructed to divide 1.0 by 3.0, how quickly would it get a result and to what precision? Or is it a matter of waiting until you get a precision that you find satisfactory? Or, is the operation itself virtually instantaneous as long as you keep it in a quantum state and use it in further quantum calculations, and only when you wish to extract the value into a non-quantum format do you need to do all the traditional heavy-lifting?
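For comparison, here is what the classical baseline does with that same example: in IEEE 754 double precision (a 53-bit binary mantissa), 1.0 / 3.0 is rounded to the nearest representable binary fraction, so the stored result is already only an approximation even before any quantum considerations enter the picture.

```python
from decimal import Decimal

# Classical baseline: the IEEE 754 double nearest to one third.
q = 1.0 / 3.0

# Converting the float to Decimal exposes the exact binary fraction
# actually stored, which agrees with 1/3 to only about 16 decimal digits.
print(Decimal(q))
```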

Sunday, February 20, 2005

Entrepreneurial Engineers blog now up and running

The blog for our Entrepreneurial Engineers web site is now up and running.

It's designed to be a gathering place for entrepreneurial engineers who are seeking to turn their great technical ideas into profitable businesses.

It's at

Saturday, February 19, 2005

Agtivity blog now up and running

In addition to our standalone web site for Software Agent Technology (, we also now have a blog dedicated to that topic. It's at

Friday, February 18, 2005

Jack Krupansky's Resume

Jack Krupansky's resume can be found at

Jack graduated from Stevens Institute of Technology in 1976, worked as a software engineer and project leader at Digital Equipment Corporation, Wang Laboratories, and Cadnetix, and has been an independent software consultant since 1986.

His main specialty is programming language technologies (compilers, tools, etc.), but he also has extensive experience with graphics and database technology. He invented the Liana C/C++-like object-oriented programming language.

Software Agent Technology

We have been doing deep background research in the emerging field of Software Agent Technology since 1996. The hype has been there for some time, but the reality of the technology is far from catching up. Many academic research conferences are held each year, and still we have barely scratched the surface of this promising technology.

You may view the fruits of our labors to date at our portal site for Software Agent Technology (

If you would like to retain our services for more information on this technology of the future, drop us an email.

Gathering Place for Entrepreneurial Engineers

We have an affiliated web site for entrepreneurial engineers who are developing or marketing products and services with their own resources.

The original inspiration for the web site was a magazine called Midnight Engineering which was published by one Bill Gates (William E. Gates, not to be confused with Microsoft founder William H. Gates III). Bill also ran a conference called ME-Ski, later renamed ENTCON. But, Bill has moved on to other interests. Nonetheless, the spirit of his efforts lives on at our web site, as well as at a similar conference called Entrepreneurial Connections (EntConnect) that is run by John Gaudio.

Our entrepreneurial engineering web site can be found at

Daily Stock Market Perspective

Jack Krupansky also publishes a daily stock market column.

It can be found at

Entrepreneurial Connections - EntConnect Conference

Jack Krupansky will be attending the Entrepreneurial Connections conference (EntConnect) out in Denver, March 17-20.

This conference is an excellent opportunity to meet Jack (and other entrepreneurs) in person.

The conference web site is at:

Profiting from Syndication

We'd be really interested to hear success stories about how people have used syndicated blogs to make real money.

The basic question is this: What was it that you did that REALLY made a big difference, such that if you hadn't done that one thing, the results would have been far less profitable?

Let's hear it!

Comment here, or email us...

Distributed Virtual Personal Computer (DVPC)

We are developing a proposal for what we call a Distributed Virtual Personal Computer (DVPC). The current proposal can be found at

The gist of the idea is that your local PC with its hard-disk is simply a cache of your data. Your real data would be distributed and mirrored on any number of network servers, suitably encrypted of course, so that you lose no data if your PC is stolen or even if the hard-drive crashes, and you can access your data from anywhere in the world, even if your PC is turned off or even no longer exists.

You would still use your PC as you do today, but modified data would be streamed up to the network servers as you work (or buffered for later streaming if you are not online).
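A minimal sketch of that write path (the class names and the trivial XOR "cipher" are illustrative placeholders of mine, not part of the actual proposal): every local write hits the PC's cache as usual, and the modified data is encrypted and streamed to a set of mirrors, or buffered for later streaming while offline.

```python
class Mirror:
    """Stand-in for one remote network server holding encrypted data."""
    def __init__(self):
        self.blocks = {}

    def store(self, path, ciphertext):
        self.blocks[path] = ciphertext

class DVPC:
    def __init__(self, mirrors, key: bytes):
        self.mirrors = mirrors
        self.key = key
        self.cache = {}      # the local PC's copy of your data
        self.pending = []    # writes buffered while offline
        self.online = True

    def _encrypt(self, data: bytes) -> bytes:
        # placeholder XOR cipher; a real DVPC would use strong encryption
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def write(self, path: str, data: bytes):
        self.cache[path] = data                       # local write, as today
        self.pending.append((path, self._encrypt(data)))
        if self.online:
            self.flush()                              # stream immediately

    def flush(self):
        # stream every buffered modification to every mirror
        for path, ciphertext in self.pending:
            for m in self.mirrors:
                m.store(path, ciphertext)
        self.pending.clear()
```

The point of the sketch is the ordering: the local cache is always written first (so the PC feels like a normal PC), and the mirrors receive only encrypted data, whether immediately or after reconnecting.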

There's a lot more to the proposal than that, but that's the essence.

Give us your feedback...

Who We Are

Base Technology is a sole proprietorship based in New York City, with Jack Krupansky as the principal, operating since 1986. Our focus is the hard-core underlying software technology for building modern information systems and applications. Our current main interest is Software Agent Technology. We do a lot of software development consulting, both for underlying technology, and applications as well.

Our main web site is located at

Our portal site for Software Agent Technology is located at

The best way to contact us is via email at