by Al Nugent
As a 20-year provider of proprietary software for the enterprise market, Novell has built products and a culture around proprietary (or closed) software. Within the last 18 months, we have embraced open source development and Linux and have injected them into our corporate DNA. While different, the two approaches are not mutually exclusive. In fact, I would argue that it is more straightforward for a proprietary company to embrace open source than for an open source company to move "up the stack." In this talk I will examine the myths, challenges, and opportunities for companies attempting to capture the best of both worlds.
by Bryan Cantrill, Michael Shapiro, and Adam Leventhal
This paper presents DTrace, a new facility for dynamic instrumentation of production systems. DTrace features the ability to dynamically instrument both user-level and kernel-level software in a unified and absolutely safe fashion. When not explicitly enabled, DTrace has zero probe effect: the system operates exactly as if DTrace were not present at all.
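For readers unfamiliar with the idea, the following is a minimal sketch of dynamic instrumentation in Python (an analogy of my own; DTrace itself uses its D language and in-kernel providers, not sys.settrace): a probe is attached to a running program, collects data while enabled, and imposes no overhead once detached.

    import sys
    from collections import Counter

    call_counts = Counter()

    def probe(frame, event, arg):
        # Count every function call while the probe is attached.
        if event == "call":
            call_counts[frame.f_code.co_name] += 1
        return None  # no per-line tracing needed

    def helper():
        pass

    def workload():
        for _ in range(3):
            helper()

    workload()            # uninstrumented run: no probe, no overhead
    sys.settrace(probe)   # dynamically attach instrumentation
    workload()
    sys.settrace(None)    # detach; the program runs as before
    print(call_counts)    # Counter({'helper': 3, 'workload': 1})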
by Eliot Lear
In the evolution of computers and networks, we have developed complex mechanisms to manage one, the other, or both. We organize teams based on technology or task, only to find that the tools they use converge at times and then diverge again. I'll discuss the latest convergences in the context of distributed systems management, network management, security, and voice in a world of ISPs, ASPs, and Web services. It all boils down to this: why can't we manage the network just like one large UNIX box?
by Bruce Schneier
All security decisions involve trade-offs: how much security you get and what you give up to get it. When we decide whether to walk down a dimly lit street, purchase a home burglar alarm system, or implement an airline passenger profiling system, we're making a security trade-off. Everyone makes these trade-offs all the time. It's intuitive, natural, and fundamental to being alive. But paradoxically, people are astonishingly bad at making rational decisions about these trade-offs.
by Rob Pike
The Web is too large to fit on a single machine, so it's no surprise that searching the Web requires the coordination of many machines, too. A single Google query may touch over a thousand machines before the results are returned to the user, all in a fraction of a second.
With all those machines, the opportunities for parallelism and distributed computation are offset by the likelihood of hardware failure. If each machine breaks on average only once every few years, a pool of a thousand machines will see machines break on a daily basis. A key part of the Google story is that by designing a system to cope with breakage, we can provide not only robustness but also parallelizability, efficiency, and economies of scale.
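To make the arithmetic concrete, here is a back-of-the-envelope sketch (my illustration; the pool size matches the abstract, while the three-year mean time between failures is an assumed figure):

    def failures_per_day(pool_size, mtbf_years):
        # Expected machine failures per day across the pool,
        # assuming independent failures.
        return pool_size / (mtbf_years * 365)

    # 1,000 machines, each failing once every ~3 years on average:
    print(failures_per_day(1000, 3))  # ~0.91, i.e., roughly one a day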
by Eric Allman
The current state of spam will be reviewed, including some thoughts about the current legislative climate (and whether legislation has any chance of doing any good) and quite a bit about the various technologies being discussed and deployed. Although opinions will be offered, no conclusions will (or can) be drawn in an environment changing as rapidly as email is today.
June 27 to July 2, 2004