Monday, September 17, 2007

Semantic Web

Originally designed for document distribution, the Web has yet to realize its full potential for distributing data. XML has done its part, but an XML Schema only describes the structure of a particular class of documents -- and relating data across independently designed schemas isn't easy. Until a viable means for surfacing and linking data is established and adopted, humans will remain the Web's core categorizing agents.
Enter the Semantic Web, an effort spearheaded by Tim Berners-Lee in 1999 to extend the Web so that machines can take up that mantle. At the outset, the idea -- to transform the Web into something machines can readily analyze -- seemed hopelessly academic. Yet with significant public data sets surfacing in Semantic Web form, the once-crazy notion now stands to revolutionize how enterprise IT accesses and disseminates data via the Web.
RDF (Resource Description Framework) -- the Semantic Web's standard format for data interchange -- extends the URI linking structure of the Web beyond naming the two ends of a link, allowing relationships among all manner of resources to be delineated. But the key to the Semantic Web -- and where most people's eyes glaze over -- is its use of ontologies. If specialized communities can successfully create ontologies for classifying data within their domains of expertise, the Semantic Web can knit together these ontologies, which are written using RDF Schemas, SKOS (Simple Knowledge Organization System), and OWL (Web Ontology Language), thereby facilitating machine-based discovery and distribution of data.
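To make the RDF part less abstract, here's a minimal sketch using the open-source Python rdflib library. The example.org URIs and the 'worksFor' property are invented purely for illustration; a real deployment would pull such terms from a published ontology.

# Minimal RDF sketch using Python's rdflib (pip install rdflib).
# The example.org URIs and the "worksFor" property are illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
alice = URIRef("http://example.org/people/alice")

# Each statement is a subject-predicate-object triple.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, EX.worksFor, URIRef("http://example.org/org/acme")))

# Serialize the graph as Turtle and run a simple SPARQL query over it.
print(g.serialize(format="turtle"))
for row in g.query("SELECT ?who WHERE { ?who a <http://xmlns.com/foaf/0.1/Person> }"):
    print(row.who)
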
Buy-in is essential to the success of the Semantic Web. And if it continues to show promise, that buy-in seems likely.
--Jayant

Quantum computing and quantum cryptography

The manipulation of subatomic particles at the quantum level has raised eyebrows in computer science research departments lately -- so much so that several approaches to incorporating quantum mechanics into computing have been launched, with varying degrees of success.
The most advanced field of research is quantum cryptography, a bit of a misnomer given that it doesn't rely on anything resembling traditional codes or ciphers. Instead of locking up data in a mathematical safe, the technique encodes the bits directly in the quantum properties of photons -- a 1 might become a photon with "left" spin; a 0, a photon with "right" spin.
The technique offers security because an unknown photon's spin can't be measured without destroying it or significantly altering it. So any eavesdropper would annihilate the message or change it enough for the recipient to notice. Two leaders in the field, IBM and Los Alamos National Laboratory, have built working devices and have demonstrated the transmission of photon streams through fiber optics and even the air.
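To see why the eavesdropper gives herself away, here's a toy Python simulation in the spirit of quantum key distribution schemes such as BB84. It's my own illustrative model, not a description of the IBM or Los Alamos devices: whenever an interceptor measures in the wrong basis, the bit the recipient gets turns random, and comparing a sample of the received bits exposes the intrusion.

# Toy model of quantum key distribution (BB84-style), for illustration only.
# Measuring a photon in the wrong basis randomizes the bit, so an
# eavesdropper introduces detectable errors (~25% on the compared bits).
import random

def send(n, eavesdrop):
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]
    bob_bases   = [random.choice("+x") for _ in range(n)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = random.choice("+x")
            if eve_basis != a_basis:          # wrong basis destroys the encoding
                bit = random.randint(0, 1)
            a_basis = eve_basis               # Eve re-sends in her own basis
        if b_basis != a_basis:                # Bob, too, must match the basis
            bit = random.randint(0, 1)
        bob_bits.append(bit)
    # Keep only positions where Alice's and Bob's bases happened to agree.
    kept = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in kept if a != b)
    return errors / len(kept)

print("error rate, no eavesdropper:   %.3f" % send(20000, eavesdrop=False))
print("error rate, with eavesdropper: %.3f" % send(20000, eavesdrop=True))
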
Another technology based on the principles of quantum mechanics, quantum computing, attempts to model computation with quantum states. The field has produced tantalizing theoretical results showing how such a computer could solve certain notoriously hard problems -- factoring exceedingly large numbers, for instance -- dramatically faster than any classical machine.
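For a feel of what "modeling computation with quantum states" means, here's a minimal state-vector sketch in Python with NumPy -- again my own illustration, not any particular research machine. Two qubits are described by amplitudes over all four classical values at once, which is where the field's theoretical speedups originate.

# Minimal state-vector sketch: two qubits as amplitudes over |00>,|01>,|10>,|11>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0            # start in |00>
state = np.kron(H, I) @ state                  # put the first qubit in superposition
state = CNOT @ state                           # entangle the two qubits

# Result: the Bell state (|00> + |11>)/sqrt(2); measurement probabilities below.
for label, amp in zip(["00", "01", "10", "11"], state):
    print(label, round(abs(amp) ** 2, 3))
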
Quantum computing is much further from having an impact in the lab or the enterprise than quantum cryptography. No one has built a particularly useful quantum computer yet, although some researchers have built machines that work with one or two qubits. One group recently announced it is building machines that work with problems that take around 1,000 bits to describe.
--Jayant

Autonomic computing

A datacenter with a mind of its own -- or more accurately, a brain stem of its own that would regulate the datacenter equivalents of heart rate, body temperature, and so on. That's the wacky notion IBM proposed when it unveiled its autonomic computing initiative in 2001.
Of the initiative's four pillars, which included self-configuration, self-optimization, and self-protection, it was self-healing -- the idea that hardware or software could detect problems and fix itself -- that created the most buzz. The idea was that IBM would sprinkle autonomic-computing fairy dust on a host of products, which would then work together to reduce maintenance costs and optimize datacenter utilization without human intervention.
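Strip away the fairy dust and the self-healing pillar boils down to a monitor-and-repair control loop. Here's a bare-bones Python sketch of the idea -- my own simplification, with placeholder check_health and restart_service functions, not anything IBM ships:

# Toy self-healing loop: probe each service, restart it when the health check fails.
# check_health() and restart_service() are placeholders for whatever probes and
# repair actions a real datacenter would wire in.
import time

def check_health(service):
    # e.g. ping a port, inspect logs, compare response time to a threshold
    return service.get("healthy", False)

def restart_service(service):
    print("restarting", service["name"])
    service["healthy"] = True                  # pretend the repair worked

def autonomic_loop(services, interval=5, max_cycles=3):
    for _ in range(max_cycles):                # a real loop would run forever
        for svc in services:
            if not check_health(svc):
                restart_service(svc)           # the "self-healing" step
        time.sleep(interval)

autonomic_loop([{"name": "db", "healthy": False}, {"name": "web", "healthy": True}], interval=1)
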
Ask IBM today, and it will hotly deny that autonomic computing is dead. Instead it will point to this product enhancement (DB2, WebSphere, Tivoli) or that standard (Web Services Distributed Management, IT Service Management). But look closely, and you'll note that products such as IBM's Log and Trace Analyzer have been grandfathered in. How autonomic is that?
The fact is that virtualization has stolen much of the initiative's value-prop thunder: namely, resource optimization and efficient virtual server management. True, that still involves humans. But would any enterprise really want a datacenter with reptilian rule over itself?
--Jayant

Solid-state drives

Solid-state storage devices -- both RAM-based and NAND (Not And) flash-based -- have held promise as worthwhile alternatives to conventional disk drives for some time despite the healthy dose of skepticism they inspire. The devices themselves are by no means new; their integration into IT will only happen once the technologies fulfill their potential and go mainstream.
Volatility and cost have been the Achilles' heel of external RAM-based devices for the past decade. Most come equipped with standard DIMMs, batteries, and possibly hard drives, all connected to a SCSI bus. And the more advanced models can run without power long enough to move data residing on the RAM to the internal disks, ensuring nothing is lost. Extremely expensive, the devices promise speed advantages that, until recently, were losing ground to faster SCSI and SAS drives. Recent advances, however, suggest RAM-based storage devices may pay off eventually.
As for flash-based solid-state devices, early problems -- such as slow write speeds and a finite number of writes per sector -- persist, though advances in flash technology have reduced these negatives. NAND-based devices are now being introduced in sizes that make them feasible for use in high-end laptops and, presumably, servers. Samsung's latest offerings include 32GB and 64GB SSD (solid-state disk) drives with IDE and SATA interfaces. At $1,800 for the 32GB version, they're certainly not cheap, but as volume increases, pricing will come down. These drives aren't nearly the speed demons their RAM-based counterparts are, but their read latency is significantly lower than that of standard hard drives.
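One of those advances is wear leveling, in which the drive's controller spreads writes across physical blocks so that no single sector burns out early. Here's a toy Python sketch of the idea -- a deliberate simplification, not how Samsung or anyone else actually implements it:

# Toy wear leveling: map logical blocks onto the least-worn free physical block,
# so repeated writes to one logical address don't exhaust one flash sector.
class FlashTranslationLayer:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks   # wear per physical block
        self.mapping = {}                           # logical -> physical

    def write(self, logical_block, data):
        # data itself isn't modeled here; we only track where it would land
        old = self.mapping.get(logical_block)
        free = [p for p in range(len(self.erase_counts))
                if p not in self.mapping.values() or p == old]
        target = min(free, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1              # the erase-before-write is what causes wear
        self.mapping[logical_block] = target
        print("logical %d -> physical %d (erases: %s)" % (logical_block, target, self.erase_counts))

ftl = FlashTranslationLayer(physical_blocks=4)
for _ in range(6):
    ftl.write(0, b"same logical block, rotated across physical blocks")
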
The state of the solid-state art may not be ready for widespread enterprise adoption yet, but it's certainly closer than skeptics think.
-- Jayant

Superconducting computing

How about petaflops performance to keep that enterprise really humming? Superconducting circuits -- which have no electrical resistance and therefore generate virtually no heat -- would certainly free you from any thermal limits on clock frequencies. But who has the funds to cool these circuits with liquid helium as required? That is, of course, assuming someone comes up with the extremely complex schemes necessary to interface this circuitry with the room-temperature components of an operable computer.
Of all the technologies proposed in the past 50 years, superconducting computing stands out as psychoceramic. IBM's program, started in the late 1960s, was cancelled by the early 1980s, and the attempt by Japan's Ministry of International Trade and Industry to develop a superconducting mainframe was dropped in the mid-1990s. Both resulted in clock frequencies of only a few gigahertz.
Yet the dream persists in the form of the HTMT (Hybrid Technology Multi-Threaded) program, which takes advantage of superconducting rapid single-flux quantum logic and should eventually scale to about 100GHz. Its proposed NUMA (non-uniform memory access) architecture uses superconducting processors and data buffers, cryo-SRAM (static RAM) semiconductor buffers, semiconductor DRAM main memory, and optical holographic storage in its quest for petaflops performance. Its chief obstacle? A clock cycle that will be shorter than the time it takes to transmit a signal through an entire chip.
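That obstacle is easy to put numbers on: at 100GHz a clock cycle lasts just 10 picoseconds, and even at a generous fraction of the speed of light a signal covers only a millimeter or two in that time -- far less than the width of a typical die. A quick back-of-the-envelope check (my assumptions, not figures from the HTMT program):

# Back-of-the-envelope: how far can a signal travel in one 100GHz clock cycle?
clock_hz = 100e9                   # 100GHz target clock
period_s = 1.0 / clock_hz          # 10 picoseconds per cycle
c_m_per_s = 3.0e8                  # speed of light in vacuum
signal_fraction = 0.5              # assumed on-chip propagation at roughly half of c

reach_mm = c_m_per_s * signal_fraction * period_s * 1000
print("signal reach per cycle: %.1f mm" % reach_mm)   # ~1.5 mm
print("typical die width:      ~20 mm")               # so several cycles just to cross the chip
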
So, unless you're the National Security Agency, which has asked for $400 million to build an HTMT-based prototype, don't hold your breath waiting for superconducting's benefits. In fact, the expected long-term impact of superconducting on the enterprise remains in range of absolute zero.
-- Jayant

Thursday, September 13, 2007

Tech-Geeks

So, you needed to boot your XP Workstation or Windows 2003 server into safe mode.
You found that it's a pain to get the F8 sequence just right, or the darn thing just won't do it.
So, you go into 'msconfig', switch to the BOOT.INI tab, and check the box labeled '/SAFEBOOT'. Click OK and reboot the box. How 'bout that, Safe Mode comes up... every time you boot, because the switch is now written into boot.ini.
From Safe Mode, you have to go back into 'msconfig' and uncheck the same box to get a normal boot again.

What if you aren't quite able to get done in Safe Mode what you wanted... and decide you'd rather have Safe Mode with a command prompt? From 'msconfig' -> BOOT.INI, leave '/SAFEBOOT' checked and pick the MINIMAL(ALTERNATESHELL) option -- that's what actually gives you the command-prompt shell; the '/NOGUIBOOT' box merely hides the graphical boot screen. Check '/NOGUIBOOT' too if you like, then reboot.
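For reference, a typical single-OS boot.ini would then look something like this -- the ARC path and OS description vary from machine to machine, so treat it as an example rather than something to paste:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /safeboot:minimal(alternateshell) /noguiboot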

Now... once you're done at the command line, how do you get the box booting normally again?

Enter the command line tool 'bootcfg'!!

This little goodie will let you modify boot options from the command line.

Here's the command to remove the /NOGUIBOOT switch from the first boot entry in the list:

bootcfg /rmsw /ng /id 1

Be sure to check out 'bootcfg /?' and 'bootcfg /YOUR_OPTION /?' for the full syntax.
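
A quick session might look something like the following. The entry ID and existing load options will differ on your box, and the '/raw' line replaces the entry's switches outright (that's also the way to clear '/safeboot' itself from the command line), so double-check with 'bootcfg /raw /?' before running it.

bootcfg /query
    (lists the boot entries and their ID numbers)

bootcfg /rmsw /ng /id 1
    (drops the /noguiboot switch from entry 1)

bootcfg /raw "/fastdetect" /id 1
    (rewrites entry 1's load options, leaving just /fastdetect)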

Ciao, Jayant