Vice President, Autonomic Computing
IBM Software Group
An on demand environment is responsive in real time, has a variable cost structure, focuses on what is core and differentiating, and is resilient around the world, around the clock. Creating such a computing infrastructure requires that the environment be integrated, open, virtualized, and autonomic -- that is, self-managing.
Developing self-managing computing resources is not a new problem for computer scientists. For decades, system components and software have evolved to cope with the increasing complexity of system control, resource sharing, and operational management. The advent of the Internet and the dramatically improved price/performance of information technology in the last few years have led to enormous growth in the scale and complexity of computing systems. Autonomic computing is the next logical step in this evolution, addressing the increasingly complex and distributed computing environments of today.
This talk will describe autonomic computing, how it fits within IBM's on demand initiative, and how policy is an integral part of autonomic computing. It will describe two policy-driven projects developed by IBM Research. The first is a prototype system that proactively provisions application servers in response to rapid changes in workload. It uses policies to describe controller operation in intuitive terms (e.g., cost sensitivity versus responsiveness), which are then decomposed into technical controller configuration settings (such as damping factors and model accuracy margins). The second is an autonomic system integration project that explores the use of policy, especially utility-function-based policies, in autonomic computing systems within an on demand environment. It uses policy to represent mathematical functions that express the business value of the possible behaviors of the IT components in an on demand system, and then uses those policies in conjunction with a system model to allocate resources in a way likely to maximize business value.
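To make the utility-function idea concrete, the following is a minimal sketch, not IBM's actual system: each application supplies a utility function mapping a resource level to business value, and a greedy allocator hands out servers one at a time to whichever application gains the most. The application names and utility curves are invented for illustration, and the greedy strategy is only guaranteed optimal for concave (diminishing-returns) utilities.

```python
# Hypothetical sketch of utility-function-based resource allocation.
# Each application's utility function maps a number of servers to a
# business value; the allocator repeatedly grants one server to the
# application with the highest marginal utility. (Greedy allocation
# is optimal only when all utility curves have diminishing returns.)

def allocate(utilities, total_servers):
    """utilities: dict of app name -> callable(servers) -> value."""
    allocation = {app: 0 for app in utilities}
    for _ in range(total_servers):
        # Marginal value of one more server for each application.
        gains = {
            app: u(allocation[app] + 1) - u(allocation[app])
            for app, u in utilities.items()
        }
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # No application benefits from more resources.
        allocation[best] += 1
    return allocation

# Two illustrative utility curves (made-up numbers).
utilities = {
    "web_tier": lambda n: 100 * (1 - 0.5 ** n),  # saturates quickly
    "batch":    lambda n: 10 * n,                # linear value
}
print(allocate(utilities, 5))  # -> {'web_tier': 3, 'batch': 2}
```

The controller would re-run such an optimization as workloads shift, which is what lets a single business-level policy drive many low-level provisioning decisions.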
Mr. Ganek leads the IBM corporate-wide initiative for autonomic computing, which focuses on making computing systems more self-managing and resilient, lowering the cost of ownership, and removing obstacles to growth and flexibility. This role reaches across IBM, touching virtually all functions. The activity includes leadership in architecture, technology, and standards as well as business and market planning. The focus is on increasing the competitiveness of IBM products and services through the infusion of autonomic computing capabilities, and on ensuring that this work is fully linked with consistent, open architecture and standards. A major emphasis is establishing industry-wide standards to enable multi-vendor solutions that deliver autonomic computing capabilities for customers.
Prior to joining IBM Software Group, Mr. Ganek was responsible for the technical strategy and operations of IBM's Research Division, a worldwide organization focused on research leadership in areas related to information technology as well as exploratory work in science and mathematics. This entailed strategic and technology outlook, portfolio management, and Research Division processes. In addition, Mr. Ganek managed the operational services supporting the Division, including finance, information services, technical journals, and site operations such as facilities management, environmental control, and safety.
Mr. Ganek joined IBM as a software engineer in 1978 in Poughkeepsie, New York where he was involved in operating system design and development, computer addressing architecture, and parallel systems architecture and design. He was the recipient of Outstanding Innovation Awards for his work on Enterprise Systems Architecture and System/390 Parallel Sysplex Design. He subsequently held numerous management and executive positions in operating systems, software quality and manufacturing, and the development of solutions for the Telecommunications and Media industries.
Mr. Ganek received his M.S. in Computer Science from Rutgers University in 1981. He holds fifteen patents.
Claus von Riegen
Group Program Manager
SAP AG, Germany
Industry support for Web services is growing fast, and numerous projects already use Web services, in particular for the integration of heterogeneous IT systems. Core Web service standards such as SOAP, WSDL, and UDDI ensure the interoperability of applications independent of the platforms on which they are implemented. In addition, a number of proposed standards cover Web service feature sets such as security and reliable messaging.
What has been missing so far is a way to describe and communicate the configuration at each end of a Web services interaction. For example, a Web service consumer needs to know which security token type and which reliable messaging delivery assurance a Web service accepts or offers in order to determine its suitability in a given usage scenario.
Web Services Policy (WS-Policy) is designed to fill this gap: it offers a framework for describing Web service capabilities and requirements and for attaching this description to a Web service endpoint or to other, higher-level constructs. This talk introduces WS-Policy in terms of its core features, its applicability to particular domains, and the next steps for its standardization.
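The core WS-Policy model can be sketched abstractly: a policy is a set of alternatives, each alternative a collection of assertions, and a consumer can interact with a service if it supports every assertion in at least one alternative. The sketch below models this in Python with sets; the assertion names are invented for illustration (actual WS-Policy expresses alternatives in XML using operators such as wsp:ExactlyOne and wsp:All).

```python
# Illustrative model of WS-Policy's alternatives-of-assertions
# structure (not the actual XML syntax). Assertion names below are
# hypothetical placeholders for things like a required security
# token type or a reliable-messaging delivery assurance.

provider_policy = [
    # Alternative 1: SAML token with exactly-once delivery.
    {"sec:SamlToken", "rm:ExactlyOnceDelivery"},
    # Alternative 2: username token over a transport binding.
    {"sec:UsernameToken", "sp:TransportBinding"},
]

consumer_capabilities = {"sec:UsernameToken", "sp:TransportBinding",
                         "rm:AtLeastOnceDelivery"}

def compatible(policy, capabilities):
    """True if the consumer supports some complete alternative."""
    return any(alt <= capabilities for alt in policy)

print(compatible(provider_policy, consumer_capabilities))  # -> True
```

This is essentially the check a consumer performs when deciding whether a discovered service is suitable for a given usage scenario.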
Mr. Claus von Riegen is Group Program Manager with SAP AG, Germany. His main assignment is the definition of SAP's strategy with regard to XML and Web services standards. In this role, he also represents SAP in both the OASIS UDDI Technical Committee and WS-I and is a principal author of the UDDI and WS-Policy specifications. Prior to this, he was SAP's representative with both the Open Applications Group and the ebXML Project. Furthermore, he has given lectures at several conferences about the architecture of XML standards in general and UDDI and Web services in particular.
Before focusing on XML, Mr. von Riegen worked on many projects in application development, including data and object modeling, workflow management, interface design and distributed systems management. He joined SAP in 1994 after obtaining a degree in Computer Science from the Technical University of Braunschweig, Germany.
Director, Internet Systems and Storage Laboratory
HP Laboratories, Palo Alto, CA
The confluence of Web-based distributed applications, Linux-based servers, and Internet data centers has enabled applications unimaginable a decade ago. However, these new environments are problematic: dedicating hardware to specific applications limits flexibility, varying application demands result in poor server utilization, rising complexity escalates operational costs, and increasing server density generates energy and cooling issues. Consequently, a new conceptual model for large-scale distributed computing is required that addresses predictability, flexibility, utilization, and cost.
Enabling the vision of planetary-scale services executing on a utility computing fabric raises a large number of research questions. To address them, we have developed a computing model in which the data center itself is considered a virtual computer controlled by a data center operating system. Economic models are used to broker resource supply against application demand. We also demonstrate the role of policy specification and verification in constraining system behavior within intended limits and in verifying its actual behavior. This utility computing model has spawned new and innovative research in architecture, dynamic resource management, automation, and energy management. Two 1000-node "utility data center" research platforms are being used to conduct this research and will be accessible to university collaborators.
Rich Friedrich leads the Internet Systems and Storage Lab in HP Labs. The ISSL research team focuses on next-generation Internet computing and storage systems, and on inventing distinctive utility computing mechanisms to provide IT infrastructure on demand.
His sustained record of innovative accomplishments spans his 20-year career in HP research and product positions. He led the system performance team that optimized the first commercial PA-RISC-based systems in the mid-1980s and the first multiprocessor, online transaction processing RISC systems in the late 1980s. He led the architecture and design of a large-scale, distributed measurement system for the OSF Distributed Computing Environment in the early 1990s.
More recently, he led the teams that invented WebQoS, a novel technology for providing predictable and stable performance for Internet-based applications, re-architected Linux for IA-64, and provided key technologies to HP's Utility Data Center.
He has participated on many scientific program committees, published extensively, and is a co-inventor on a dozen patents. He is a graduate of Northwestern University.