Service-oriented architecture (SOA) is an architectural style that promotes developing applications in a highly decoupled manner with well-defined service interfaces. Application-level boundaries and technology differences are removed through its encouraged support for heterogeneity. Connecting heterogeneous applications together without jeopardizing security is
equally important. Conventional applications hard-code their own security models; in other words, security is baked into the application itself. This is not the best fit for an SOA deployment.
Standards such as WS-Security, SAML, WS-Trust, WS-SecureConversation, and WS-SecurityPolicy have emerged over the years to define the best-fit security model for an SOA deployment based on Web Services.
This session will cover patterns, best practices, and threats associated with SOA security models.
Software development is a complex field that is further complicated by software licensing. This workshop will teach you the essential skills and best practices needed to effectively manage the complexity of software licensing, particularly in environments where Free Software/Open Source licenses and proprietary licensed software need to coexist.
Topics to be covered will include:
* copyright basics
* an overview of Free Software and Open Source licensing
* best practices for licensing contributions from third parties
* licensing strategies for companies, projects and individuals
* best practices for managing license compliance
* explaining Free/Open licensing and Free/Open/proprietary hybrid licensing to customers
* multiple licensing models and proprietary re-licensing schemes
* how to perform a licensing audit on your code base
* best practices for integrating third party code into your code base
by David A. Wheeler
The Apache Software Foundation develops and maintains software that the world depends on. But how can it help create software that withstands later attack? How can it counter malicious developers and repository subversion? And how can it encourage and enable innovation while doing so? This talk will broadly discuss how to resist attack while enabling innovation.
Apache OODT is the first ever NASA project to be hosted at the Apache Software Foundation. After nearly a year in the Incubator, during which the OODT community transformed from a set of collaborating NASA, university, and other government institutions into a set of collaborating individuals working together at the Apache Software Foundation, Apache OODT was made a top-level project in November 2010.
One of the most frequent things we see in the OODT community, however, is the desire among our users and other individuals to know more about the overall ecosystem. As a project and community, OODT has existed for 10+ years and spans the areas of technology, research, and academia, including numerous book chapters, journal articles, and peer-reviewed conference publications documenting OODT's use in different science domains and across multiple projects.
This overview talk will give attendees insight into OODT's history, its community, its projects, and its ecosystem. Topics will include:
Lucene 4.0 is the next intentionally backwards-incompatible release of Apache Lucene, bringing a large set of fundamental API changes, performance enhancements, new features, and revised algorithms. Motivated by state-of-the-art information retrieval research, Lucene 4.0 introduces an entirely new low-level codec layer, automaton-based inexact search, low-latency realtime search, column-stride fields, and new highly concurrent indexing capabilities. This talk will introduce Lucene's major new features, briefly explain their implementation, and present several performance improvements of up to 20,000% compared to previous versions of Lucene.
The ApacheCon Business Track has expanded over the past nine conferences, addressing an array of key business, marketing, and legal/licensing issues in Open Source. Our panel of influencers will answer your questions on The Business of Open Source, including customer requirements, application opportunities, deployment challenges, best practices, product development, standards compliance, business model disruptors, and more. Moderator Sally Khudairi will create a lively, interactive dialogue by inviting comments from the audience throughout the session.
PANELISTS INCLUDE Debbie Moynihan of FuseSource, Ross Turk of Talend, and Kevin Carson of Hewlett-Packard.
by Jean Frederic Clere
Browsers and web servers are built on standards, and the need for instantaneous data exchange has grown. AJAX, for example, allows web clients to communicate asynchronously with remote web servers.
Comet is a Tomcat feature that goes beyond AJAX and allows real asynchronous unidirectional and bidirectional connections between client and server using the HTTP protocol and Servlets.
The Servlet 3.0 specification also provides asynchronous calls; we will see what is possible to do with them.
Tomcat-Native is a Tomcat sub-project that provides non-blocking and very efficient SSL connections.
Tomcat-Native relies on the APR (Apache Portable Runtime) for socket input/output and uses OpenSSL for the cryptographic layer.
We will compare the performance of Tomcat, Tomcat + APR, and httpd.
Servlet 3.0 is a new specification that is part of the Java EE 6 technologies. This session will introduce you to the new features of Servlet 3.0 and explain how you can leverage them in your applications. The session will focus on two major themes: ease of development, and improving application scalability. This session is intended for Java EE developers, administrators, and architects.
JSF 2.0 is a new specification that is part of the Java EE 6 technologies. This session will introduce you to the new features of JSF 2.0 and explain how you can leverage JSF in your Java EE and Portlet applications. The session will focus on three major themes: ease of development, performance improvements, and open source technology adoption. You will learn how the tools for the Application Developer make developing UI easier, as well as the JSF-Dojo component library. This session is intended for Java EE developers, administrators, and architects.
by Chris Hostetter
Apache Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. In this session we will see how quick and easy it can be to install and configure Solr to provide full-text searching of structured data without needing to write any custom code. We will demonstrate various built-in features such as loading data from CSV files, tolerant parsing of user input, faceted searching, highlighting matched text in results, and retrieving search results in a variety of formats (XML, JSON, etc.). We will also look at using Solr's administrative interface to understand how different text analysis configuration options affect our results, and why various results score the way they do against different searches.
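As a taste of the "no custom code" workflow the session describes, the sketch below builds the kind of request URLs Solr's stock HTTP handlers accept: the CSV update handler for loading data, and the select handler with faceting, highlighting, and a choice of response format. The host, core layout, and field names are assumptions for illustration and may vary with your Solr version and schema.

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed default standalone Solr address

def csv_load_url(filename):
    """Build a URL for Solr's built-in CSV loader (no custom code needed)."""
    params = {"stream.file": filename, "commit": "true"}
    return SOLR + "/update/csv?" + urlencode(params)

def faceted_search_url(query, facet_field, fmt="json"):
    """Build a faceted full-text search URL with match highlighting."""
    params = {
        "q": query,            # tolerant parsing of user input happens server-side
        "facet": "true",
        "facet.field": facet_field,
        "hl": "true",          # highlight matched text in results
        "wt": fmt,             # response format: xml, json, ...
    }
    return SOLR + "/select?" + urlencode(params)

print(csv_load_url("/data/books.csv"))
print(faceted_search_url("enterprise search", "category"))
```

In practice these URLs would be fetched with any HTTP client (curl, a browser, or a library); everything shown here is query configuration rather than code that must be written against Solr's internals.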
Apache TomEE, pronounced “Tommy”, is a simple all-Apache stack aimed at Java EE 6 Web Profile certification where Tomcat is top dog. The desire to beef up Tomcat installations has persisted despite the existence of full-blown app servers, many of which include Tomcat in some truncated and stripped-down form. With today’s embedded APIs, lighter-weight components and the Java EE 6 Web Profile, that can now be fully realized. Pioneered from the Apache OpenEJB project and including Apache OpenWebBeans, Apache MyFaces, Apache ActiveMQ, Apache OpenJPA and Apache CXF, the Apache TomEE integration of these technologies is simple, to-the-point, and focused on the singular task of delivering the Java EE 6 Web Profile in a minimalist fashion. Finally you can get a little more out of your lightweight Tomcat apps without having to give up anything in the process.
by Tim O'Brien
As developers we often drive the adoption of open source technology in addition to being tasked with explaining the concepts of open source communities to management. While every company seems to be well into "leveraging open source models", there's still a great deal of confusion among management. For example, have you ever heard some C-level executive try to describe "open source"? Did it sound something like the following paragraph?
The past decade saw a lot of CTOs thinking "outside the box", attempting to "leverage" the collective "synergies" of open source to improve key performance indicators (KPI) all while preserving the necessary provenance and governance structures that allow the organization to really "fire on all cylinders". As they understand open source, the collaborative paradigm-shift of distributed "knowledge-workers" allows interested parties to benefit from external sources of innovation giving management the opportunity to achieve greater dominance of vertical markets without having to expend critical resources on issues irrelevant to other C-level stakeholders. In short, Open Source in the enterprise is a win-win, and everybody likes a win-win, right?
If that paragraph made you want to cry, this presentation is for you. We're going to take a look at some of the issues that arise when your boss doesn't fully understand how open source works, and we're also going to explore some ways in which you can help guide management toward a fuller, more accurate appreciation of the culture.
A humorous analysis of the various ways in which "open-source" is misapplied and misunderstood in the corporation will be presented. Along the way we'll use these misconceptions to make some strong recommendations for how not to bring Apache to your organization.
The following topics will be explored:
by Andrew Hart
One of OODT's core strengths is the loosely connected nature of its components. This architecture allows data management infrastructures to be composed by linking the core building blocks together in various ways to support a broad spectrum of data system requirements.
Three of the core OODT components, the File Manager, Workflow Manager, and Resource Manager, along with components for crawling data repositories and extracting metadata, together represent the fundamental ingredients of many large-scale data system projects and are often referred to as a Process Control System or PCS. This session will discuss a new, unified "operator" interface for monitoring and interacting with the Process Control System.
The OODT PCS Operator User Interface, or OpsUI, provides high-level and detailed information about the status of the underlying components in an intuitive, browser-based interface. It takes advantage of a suite of RESTful web services, developed in OODT 0.3, that expose component-level information as JSON.
This session will provide an overview of the OpsUI, discuss the services it provides, and explain how the interface can be used to monitor and manage a full-scale data system in real time.
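To make the JSON-over-REST idea concrete, here is a minimal sketch of how a client like the OpsUI might summarize a component-status payload. The payload shape and field names are hypothetical, invented for illustration; the actual OODT 0.3 services may expose different structures.

```python
import json

# Hypothetical example of the kind of JSON a PCS status service might return;
# the real field names in OODT's REST services may differ.
payload = """
{
  "components": [
    {"name": "File Manager",     "status": "UP"},
    {"name": "Workflow Manager", "status": "UP"},
    {"name": "Resource Manager", "status": "DOWN"}
  ]
}
"""

def summarize(raw):
    """Return (healthy, total) component counts from a status payload."""
    components = json.loads(raw)["components"]
    healthy = sum(1 for c in components if c["status"] == "UP")
    return healthy, len(components)

healthy, total = summarize(payload)
print(f"{healthy}/{total} components up")  # prints: 2/3 components up
```

Because the services speak plain JSON over HTTP, the same data can drive a browser dashboard, a command-line monitor, or an alerting script with equal ease.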
Apache Rave is a new Incubator project commissioned to develop an open standards-based web and social mash-up engine for the Enterprise. The goal for Apache Rave is to become a leading open-source, context-aware User Experience Platform capable of delivering a wide variety of interaction channels, using open standards technologies such as OpenSocial and W3C widgets and REST-based services. As Rave advances, it will add features like content integration, collaboration, personalization, and rich user application features.
Apache Rave was initiated in 2010 at the previous ApacheCon in Atlanta and through subsequent discussions during the first European OpenSocial Event in Utrecht (The Netherlands), and was thereafter created as a joint effort of several existing projects and individual participants. The Rave project thus provides an interesting case study for bringing together multiple projects with already mature code bases. Rave members are also working together to reach out to other related Apache projects like Shindig and Wookie, to other potentially interested projects and developers both within and outside Apache, and most importantly to the many and diverse target communities.
During the presentation, project initiators from Hippo, The MITRE Corporation, and Indiana University will provide an overview of Apache Rave: its current status, features, and goals, and talk about the process of getting there.
Lucene and Solr have always provided very capable text search, but did you know they are useful for many other things as well?
In this talk, we'll take a look at some of the myriad ways that Lucene and Solr can be used to solve real-world challenges, ranging from classifying content and recommending movies all the way through to taking your unit testing to the next level.
by kevan miller
Apache Geronimo 3.0 supports both Java EE 6 and Enterprise OSGi programming models. In this presentation, we will review Geronimo 3.0 features, discuss server administration, and demonstrate the development of Java EE and OSGi applications using Apache Geronimo.
by Bryan Call
When a company decides to open source software there are a lot of details to consider. There are security concerns, software license issues, patent ownership issues, trademark ownership, existing software contracts, and possibly meeting the requirements of organizations like the Apache Software Foundation.
I will talk about how we handled each of these issues when open sourcing Traffic Server and why we chose to open source our code with the Apache Software Foundation. I will talk about the pros and cons of open sourcing and, as a company, what we did well and didn't do well.
Open sourcing was beneficial to the Traffic Server project. As a company it is important to know how to help foster a community and benefit from the open source development process.
by Emily Law
Apache™ OODT (Object-Oriented Data Technology) is being used to support multiple science data systems at NASA's Jet Propulsion Laboratory (JPL). In the scientific data systems domain, one of the critical functions that data systems must provide is to enable users to efficiently discover, find, access, and readily utilize useful science content from NASA’s increasingly large volumes of multi-mission, multi-instrument science data heterogeneously distributed among NASA’s data sources at various geographic locations.
In the past, when users wanted to analyze or retrieve these science data, they had to rely on various custom-built tools. Using OODT, which provides transparent access to distributed resources, functionality for data discovery and query optimization, and distributed processing and virtual archives, JPL data systems can handle data from various sources in a uniform way.
In this talk, we will describe our experience over the past several years in leveraging and deploying OODT in our science data systems. We will outline the software engineering challenges that we faced and addressed along the way using OODT. We will describe several large-scale deployments of OODT and the manner in which OODT helped us to address the data discovery and access challenges. We will also relate the lessons we have learned from our experience.
by Esen Sagynov
The Apache community has developed great software used by millions of users around the world, including large corporations like Google, Facebook, Yahoo!, and NHN. But what we all know is that most of the time software, including Apache products, is developed with the English-speaking market in mind. I would suggest that this is the reason why Google has not been able to conquer the Korean search market, where Naver.com (NHN's web portal) is the engine of choice and Google accounts for only 5% of the market; or Russia, where Google also takes a back seat, with Yandex being the search engine of choice for all demographics; or China with its monopolistic Baidu; or Japan, where Yahoo! is the engine of choice. Isn't that suspicious?
The thing is, Google does a poor job of analyzing text in languages written in non-Latin scripts. At this FFT I will explain how NHN (Korea's leading search engine provider) managed to customize Apache Lucene for the country's unique language and adopted it across its vast server infrastructure. I will also explain why Apache Lucene was chosen over Sphinx or even Solr. Most importantly, I will point out the background facts of the Asian market and how to better deal with this group of users, their preferences, and their behavior, so that attendees can learn more about this market from industry leaders and prepare themselves to create better and widely successful software products.
by kevan miller
In its 3.0 release, Apache Geronimo has been rebased on an OSGi runtime. Geronimo’s traditional modularity is now expressed as OSGi bundles. Server assembly and application-centered deployment are now done by installing sets of bundles into an OSGi core framework. Java EE applications are deployed by transforming them into collections of bundles. In this talk, we will review this extensive restructuring of the Geronimo runtime and discuss the new capabilities that this framework provides.
by Cameron Goodale
The Apache OODT project is a data management system framework with over 18 different components. This design enables users to leverage only the pieces they require, but this flexibility can become an issue when new users want to use OODT. We have started developing RADiX to help users (both new and seasoned) quickly configure, build and deploy several of the core OODT components.
At its core, RADiX is an Apache Maven archetype with some additional scripts and configuration. Our main focus is to enable users to download, build, and deploy a default OODT instance in five commands or less. In our talk we plan to explain the complexities we have encountered when using OODT, and how we believe that RADiX will provide an 80% solution that 99% of our users will find helpful.
Open source is more than just a license; it is also a software development methodology that allows companies to share resources and collaborate on critical parts of their software/service offerings.
Open innovation means combining internal and external ideas, and internal and external paths to market, to advance a company's technology.
The parallels should be obvious, yet people don't always think of open source as an enabler for open innovation. Open source, if done right, brings many external eyeballs and fast feedback to the software development process.
We will show how those eyeballs and feedback can make a huge difference in a company's potential for innovation, and as a result provide compelling arguments for moving large parts of your software development efforts to open source, as Day Software (now part of Adobe) started doing a few years ago.
by Erik Hatcher
Show off the power of Apache Solr with state of the art user interfaces and interactions. Solr Flair demonstrates live systems leveraging Ajax suggest, “instant” search and preview, did you mean?, spell checking, faceting, filtering, grouping, and clustering. We’ll see how to generate charts, maps, and timelines from Solr indexed data. Each example will be presented with the complete code, configuration, and user interface elements.
by Andrew Hart
OODT traces its roots to planetary and Earth science data systems built to support research at the NASA Jet Propulsion Laboratory. Today, as an open-source project at Apache, OODT has a thriving and diverse community that includes projects in cancer research, radio astronomy, pediatric care, computer modeling, and visualization. This session will discuss how many of the components of OODT have been used in concert to develop end-to-end data analysis infrastructures for examining large volumes of complex medical data. The session is motivated by, and illustrated with, concrete examples from our experience developing an infrastructure for data-driven decision support at the Whittier Virtual Pediatric Intensive Care Unit at Children's Hospital Los Angeles.
by Ricky Nguyen
We will describe the current status of our project, funded by a Challenge Grant from the National Library of Medicine (NLM), to develop data-driven decision support systems for treatment of critically ill children. Researchers from Children's Hospital Los Angeles and NASA's Jet Propulsion Laboratory (JPL) partnered to achieve three specific aims. One of the specific aims is to develop optimal extraction strategies for clinical data from existing electronic health care records and monitoring systems. We are leveraging the Apache Object Oriented Data Technology (OODT) framework to extract data from existing clinical databases and systems at Children's Hospital Los Angeles (CHLA) and partner pediatric intensive care units (PICUs) and to stage that data in research databases for analysis.
by Afkham Azeez
Apache Tomcat is one of the most popular and widely used application servers, Apache Axis2 is one of the most widely used Java Web services servers, and Apache Synapse is one of the most popular high-performance ESBs in the industry.
In this session, we will look at how we combine these great projects from the ASF to build a scalable, elastic, multi-tenant application server, which allows you to deploy cloud-native webapps and benefit from all the advantages that cloud computing brings. We will also see how easy it is to deploy any standard webapp on the cloud and seamlessly integrate with the authentication, authorization, and management infrastructure provided by the underlying Platform-as-a-Service (PaaS).
by Ross Gardler
Apache projects are managed by volunteers. Can you really build your products, infrastructure, or services on software managed by volunteers? The answer to this commonly asked question is a definite yes, and in this session Ross Gardler will explain why. At the core of the argument is the fact that whilst contributors are volunteers here at the ASF, they will almost certainly be paid by someone for the time they spend working on our projects. This prompts a second common concern: can you really build your products, infrastructure, or services on Apache software if you haven't got an army of staff to ensure the project is not "hijacked" by a third party? Once again, the answer is a definite yes, and Ross will explain why. We will examine the meritocratic governance model used in Apache projects and explain how it ensures that even the smallest of organizations can become an important, even critical, part of a project team, whilst also ensuring that no single organization can take control of a project by throwing resources at it.
Almost every developer has worked for a bad manager. A good manager is a pleasure to work for; a bad one can make your life a misery. In this session I will talk from the point of view of the 'misguided manager': a manager who, with the best possible intentions, combines the worst of all management practice to make a developer's life complete hell.
by Brian Showers
Solr is an open source, Lucene-based search platform originally developed by CNET and used by the likes of Netflix, Yelp, and StubHub, and it has been rapidly growing in popularity and features during the last few years. Learn how Solr can be used as a Not Only SQL (NoSQL) database along the lines of Cassandra, Memcached, and Redis. NoSQL data stores are regularly described as non-relational, distributed, and internet-scalable, and are used at both Facebook and Digg. This presentation will quickly cover the fundamentals of NoSQL data stores, the basics of Lucene, and what Solr brings to the table. Following that, we will dive into the technical details of making Solr your primary query engine in large-scale web applications, thus relegating your traditional relational database to little more than a simple key store. Real solutions to problems like handling four billion requests per month will be presented. We'll talk about sizing and configuring the Solr instances to maintain rapid response times under heavy load. We'll show you how to change the schema on a live system with tens of millions of documents indexed while supporting real-time results. And finally, we'll answer your questions about ways to work around the lack of transactions in Solr and how you can do all of this in a highly available solution.
Questions to be answered:
1. Why should I use Solr to relieve load from my relational database?
2. How is Solr better than the alternative NoSQL solutions already in place?
3. How do I address the pitfalls of working with Solr in large-scale applications?
4. What things would be more difficult in Solr than if I had stuck with my relational database?
5. Is Solr a complete and competitive NoSQL datastore?
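To illustrate the "Solr as primary store" idea in the abstract, the sketch below treats a flat record as a Solr document: it renders the record in the XML format accepted by Solr's /update handler, and builds a lookup-by-unique-key query that behaves like a key-value get(). The record fields, the id key, and the URL paths are assumptions for illustration; real schemas and handler mappings vary by deployment.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def to_add_doc(record):
    """Render a flat record as the <add><doc>...</doc></add> XML document
    accepted by Solr's /update handler."""
    add = ET.Element("add")
    doc = ET.SubElement(add, "doc")
    for name, value in record.items():
        field = ET.SubElement(doc, "field", {"name": name})
        field.text = str(value)
    return ET.tostring(add, encoding="unicode")

def lookup_url(doc_id):
    """Query Solr by unique key, analogous to a key-value store get()."""
    return "/solr/select?" + urlencode({"q": f"id:{doc_id}", "wt": "json"})

# A hypothetical user record stored directly in Solr instead of an RDBMS.
xml_doc = to_add_doc({"id": "user-42", "name": "Ada", "city": "London"})
print(xml_doc)
print(lookup_url("user-42"))
```

The point of the sketch is the shape of the workflow, not the transport: once reads and writes go through Solr like this, the relational database is reduced to the simple key store role the abstract describes.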
6th–11th November 2011