by Rodrigo Laiola Guimaraes, Dick Bulterman, Pablo Cesar and Jack Jansen
In this paper we report on our efforts to define a set of document extensions to Cascading Style Sheets (CSS) that allow for structured timing and synchronization of elements within a Web page. Our work considers the scenario in which the temporal structure can be decoupled from the content of the Web page in a similar way that CSS does with the layout, colors and fonts. Based on the SMIL (Synchronized Multimedia Integration Language) temporal model we propose CSS document extensions and discuss the design and implementation of a proof of concept that realizes our contributions. As HTML5 seems to move away from technologies like Flash and XML (eXtensible Markup Language), we believe our approach provides a flexible declarative solution to specify rich media experiences that is more aligned with current Web practices.
by Daniel Bastos, Nailton Andrade and Cassio Prazeres
The evolution of embedded computing has enabled devices that, when connected to the Internet, give rise to the Internet of Things. The Web of Things proposes making these devices available as resources for application development using Web protocols and standards. The variety of devices that can connect to the Web of Things demands implementation effort for device-specific access services. In this context, this work proposes the automatic configuration and publication of devices as resources on the Web of Things, by means of models that map their functionalities. The paper also presents the dynamic discovery of devices as they connect to a local network, using techniques from the Zeroconf protocol, and the automatic generation of applications that publish the devices on the Web of Things.
by Davi Oliveira Serrano de Andrade, Cláudio de Souza Baptista, Hugo Feitosa de Figueirêdo and George Henrique Queiroga de Abrantes
GPS receivers are increasingly integrated into smartphones, tablets and digital cameras, but they do not work well indoors. This malfunction can yield location information far removed from where a picture was actually taken, or no information at all. This paper presents PG++, a tool for location annotation in personal digital photo collections that allows automatic location propagation between photos. The focus of the tool is to minimize the number of photos geotagged by the user and to maximize the number of automatically geotagged photos. Beyond this minimization, PG++ extracts metadata and organizes the collection into spatial clusters.
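The abstract does not detail how propagation works; a minimal sketch, assuming (hypothetically) that an untagged photo inherits the coordinates of the nearest-in-time geotagged photo within a fixed window, could look like this:

```python
from datetime import datetime, timedelta

def propagate_locations(photos, max_gap=timedelta(minutes=30)):
    """Copy GPS coordinates from geotagged photos to untagged ones taken
    within `max_gap` of the nearest tagged photo (toy heuristic, not PG++)."""
    tagged = [p for p in photos if p.get("gps")]
    for photo in photos:
        if photo.get("gps"):
            continue
        # find the tagged photo closest in capture time
        best = min(tagged, key=lambda t: abs(t["time"] - photo["time"]), default=None)
        if best and abs(best["time"] - photo["time"]) <= max_gap:
            photo["gps"] = best["gps"]
    return photos

photos = [
    {"name": "a.jpg", "time": datetime(2014, 11, 18, 10, 0), "gps": (-20.27, -40.30)},
    {"name": "b.jpg", "time": datetime(2014, 11, 18, 10, 10), "gps": None},
    {"name": "c.jpg", "time": datetime(2014, 11, 18, 15, 0), "gps": None},  # too far in time
]
propagate_locations(photos)
print(photos[1]["gps"], photos[2]["gps"])
```

Here `b.jpg` inherits the coordinates of `a.jpg` (10 minutes apart), while `c.jpg` stays untagged because no tagged photo falls within the window.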
by Angela María Vargas Arcila, Sandra Baldassarri and José Luis Arciniegas Herrera
by Delmys Pozo Zulueta, Yeniset León Perdomo, Ailyn Febles Estrada, Yusleydi Fernández Del Monte, Adisleydis Rodríguez Alvarez and Yanet Brito Riverol
by Artur Kronbauer, Díferson Machado and Celso Alberto Saibel Santos
In recent years, after the great proliferation of mobile devices, the relationship between usability, context and users' emotions has become a widely discussed theme in studies of user experience (UX). Evaluations show that humans typically interact with computer systems in unusual ways and have different feelings about applications. To contribute to this area of study, this paper presents a platform for the collection and analysis of data related to the user experience of mobile applications. To evaluate the potential of the platform, an experiment was conducted with the participation of 68 people over thirty days. The study results are presented and discussed throughout the paper.
by Jose Vinicius de Miranda Cardoso, Carlos Danilo Miranda Regis and Marcelo Sampaio Alencar
This paper describes an application for full-reference stereoscopic image quality assessment. The application was developed using the Mono framework and the C\# programming language. It is platform independent and provides a friendly Graphical User Interface (GUI). The stereoscopic image signals used in the application are based on a two-view model. The application implements objective image quality algorithms, such as PSNR, SSIM and PW-SSIM, and incorporates a recently published technique for stereoscopic image quality assessment, called Disparity Weighting (DW). Numerical results corresponding to the performance of the objective measurements obtained using the proposed application are presented. The application can be used by academia and industry for the standardization and development of objective algorithms and for the evaluation of impairments in stereoscopic image signals caused by processing techniques.
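Of the metrics named in the abstract, PSNR is the simplest to state. As a self-contained sketch (pure Python over flat pixel sequences, not the authors' C\# implementation):

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences:
    10 * log10(MAX^2 / MSE)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]
dist = [54, 55, 60, 66, 69, 62, 64, 72]
print(round(psnr(ref, dist), 2))  # → 48.13
```

For a stereoscopic pair, a full-reference tool would apply such a metric to each view (and, in the DW case, weight errors by disparity), then combine the per-view scores.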
This paper presents a metadata-based framework for software architecture evaluation of quality attributes. It implements a scenario-based approach that uses dynamic analysis and code repository mining to provide an automated way to reveal degradations of scenarios across releases of web-based systems. The evaluation process has three phases: (i) dynamic analysis, which collects information about scenarios in terms of measurable quality attributes; (ii) degradation analysis, which processes and compares the results of the dynamic analysis in terms of quality attributes for two or more existing releases of a web-based system, to identify degraded scenarios with respect to the desired quality attributes; (iii) repository mining, which looks for development issues and commits associated with code assets of the degraded scenarios. The paper also presents and discusses the results obtained by instantiating the framework for the library module of a large-scale web system.
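Phase (ii) amounts to comparing per-scenario metrics between releases. A minimal sketch of that comparison, assuming hypothetical scenario names and mean response times in milliseconds (the framework's actual metadata model is not shown in the abstract):

```python
def degraded_scenarios(baseline, release, threshold=0.10):
    """Flag scenarios whose mean response time worsened by more than
    `threshold` (relative) between two releases; report the relative change."""
    flagged = {}
    for scenario, old_ms in baseline.items():
        new_ms = release.get(scenario)
        if new_ms is not None and new_ms > old_ms * (1 + threshold):
            flagged[scenario] = round((new_ms - old_ms) / old_ms, 2)
    return flagged

v1 = {"search_book": 120.0, "checkout_loan": 300.0, "renew_loan": 80.0}
v2 = {"search_book": 150.0, "checkout_loan": 310.0, "renew_loan": 79.0}
print(degraded_scenarios(v1, v2))  # → {'search_book': 0.25}
```

Phase (iii) would then mine the repository for commits touching the code assets behind `search_book`.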
by Francisco Jose Perales and Silvia Ramis Guarinos
by Telmo Silva
by Lucas Chigami, Michael Hernandez, Reinaldo Matushima, Adriano Adorya, Itana Stiubiener, Graça Bressan, Regina Silveira and Wilson Ruggiero
Projections indicate that video will account for 90% of network traffic by the end of the decade. This paper presents a set of monitoring tools for gathering data on the consumption of video distributed over the Internet, covering the consumer's degree of knowledge absorption, their satisfaction with the quality of the video and its content, their behavior with respect to the web page that presents it, and the computing resources consumed. To validate these tools, we integrated them into an instance of RNP's video-on-demand system (vídeo@RNP) and evaluated them in a scenario where video is used as an instrument for teaching/learning and for cultural dissemination.
by Daniel G. Costa, Ivanovitch Silva, Luiz Affonso Guedes, Paulo Portugal and Francisco Vasques
Wireless visual sensor networks provide valuable information for many monitoring and control applications. Sometimes, a set of targets needs to be monitored by deployed camera-enabled sensors. In such networks, however, some active visual sources may fail, potentially degrading the application's monitoring quality when targets become uncovered. Moreover, some applications may need different perspectives of the same target. As visual sensors will be used to monitor a set of targets, a high level of monitoring redundancy may be required, and an effective way to achieve it is to ensure that targets are concurrently viewed by more than one visual sensor. We propose a centralized greedy algorithm to enhance redundancy in wireless visual sensor networks when visual sensors with adjustable orientations are deployed. Moreover, as some targets may be more critical for the application, we propose a balanced configuration of the sensors' poses in order to find an optimized configuration of the deployed visual sensors. We expect that the proposed approach can improve the monitoring availability, or even the monitoring redundancy, of wireless visual sensor networks deployed for target coverage.
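To illustrate the flavor of such a greedy orientation choice (this is a generic set-cover-style sketch, not the authors' algorithm, and the sensor/target names are hypothetical):

```python
def greedy_orientations(sensors):
    """For each sensor, greedily pick the orientation covering the most
    still-uncovered targets; ties broken by total targets in view."""
    covered = set()
    choice = {}
    for sensor, orientations in sensors.items():
        best = max(
            orientations,
            key=lambda o: (len(set(orientations[o]) - covered), len(orientations[o])),
        )
        choice[sensor] = best
        covered |= set(orientations[best])
    return choice, covered

# each orientation maps to the set of targets visible in that pose
sensors = {
    "cam1": {"north": ["t1", "t2"], "east": ["t2"]},
    "cam2": {"north": ["t2"], "west": ["t3"]},
}
choice, covered = greedy_orientations(sensors)
print(choice, sorted(covered))
```

`cam1` takes "north" (two new targets), after which `cam2` takes "west", since "north" would add nothing new; a redundancy-oriented variant would instead score orientations by how many targets they bring to a second viewer.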
In this paper we analyze the scientific articles published in all previous WebMedia editions, in order to provide a bird's eye view of the Brazilian Multimedia Community and to show how the research topics addressed in the WebMedia series of events have evolved over time. We used Social Network Analysis techniques to identify research groups, clusters and topics in the papers presented at WebMedia events over the last two decades.
by Luis Nícolas De A. Trigo and Carlos André Guimarães Ferraz
by Paulo Artur de Sousa Duarte, Francisco Anderson de Almada Gomes, Felipe Mota Barreto, Windson Viana de Carvalho and Fernando Trinta
This paper describes the architecture and main features of LoCCAMConfigurator, a tool for the visual modeling of context information and contextual rules. The tool assists mobile application developers in modeling the context information of their applications and in defining context-aware behaviors. LoCCAMConfigurator uses a model-driven engineering approach for context modeling and subsequent automatic code generation. It uses the models created by users to generate an Android project and a configured version of the context-aware middleware LoCCAM (Loosely Coupled Context Acquisition Middleware). This Android project includes the library and the methods for communication between LoCCAM and the future application being developed. All the code for context gathering, filtering, detection and querying is generated by the tool, so developers can concentrate on the business logic of their application.
by Toni Bibiloni
With the advent of Web 2.0 and the behavior change it brought, millions of users worldwide contribute various forms of data, such as movie ratings, to different databases. Moreover, the same real-world object (a song, a band or a movie) can be modeled using different ontologies or represented in different ways within the same ontology. Thus, the same film is often described by different attributes in different databases, making it difficult to perform an automatic mapping between those databases. We propose MovieMatcher, a heuristic that matches films across different databases using their metadata. In two experiments attempting to match 500 films to the IMDb and Rotten Tomatoes databases, MovieMatcher achieved success rates of 97.4% and 94.1%, respectively, in contrast to an alternative, simpler approach (exact title matching), which achieved 80.8% and 81.9%.
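The abstract does not disclose MovieMatcher's heuristic; a simplified sketch of metadata-based matching (fuzzy title similarity via `difflib`, with a penalty for a mismatched release year — the catalog entries below are illustrative) shows why this beats exact title matching:

```python
from difflib import SequenceMatcher

def match_movie(query, candidates, min_ratio=0.8):
    """Return the candidate whose (title, year) best matches `query`,
    or None when no title is similar enough (toy heuristic, not MovieMatcher)."""
    def score(cand):
        ratio = SequenceMatcher(None, query["title"].lower(), cand["title"].lower()).ratio()
        if query.get("year") and cand.get("year") and query["year"] != cand["year"]:
            ratio -= 0.2  # penalize a year mismatch
        return ratio
    best = max(candidates, key=score)
    return best if score(best) >= min_ratio else None

catalog = [
    {"title": "The Godfather", "year": 1972},
    {"title": "The Godfather Part II", "year": 1974},
]
hit = match_movie({"title": "The Godfathr", "year": 1972}, catalog)  # misspelled query
print(hit["title"])  # → The Godfather
```

Exact matching fails on the misspelled query, while the similarity score still pairs it with the right entry; a query unlike any catalog title falls below `min_ratio` and yields `None`.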
by Andre Luiz Firmino Alves, Claudio de Souza Baptista, Anderson Almeida Firmino, Maxwell Guimarães de Oliveira and Anselmo Cardoso de Paiva
The widespread use of social communication media on the Web has made available a large volume of opinionated textual data stored in digital format. These media constitute a rich source for sentiment analysis and for understanding spontaneously expressed opinions. Traditional techniques for sentiment analysis are based on POS tagging. For the Portuguese language, POS tagging ends up being too costly, due to the language's complex grammatical structure. Faced with this problem, we carried out a case study comparing two techniques for sentiment analysis: SVM versus Naive Bayes classifiers. Our study focused on tweets written in Portuguese during the 2013 FIFA Confederations Cup, although the technique could be applied to any other language. The results indicate that the SVM technique surpassed Naive Bayes in terms of performance.
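As a self-contained illustration of the simpler of the two classifiers, here is a toy multinomial Naive Bayes with Laplace smoothing over bag-of-words tweets (the tokens and labels are made up; the paper's actual features and corpus are not shown in the abstract):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (token_list, label). Returns per-class log-priors and
    Laplace-smoothed per-word log-likelihoods, plus the vocabulary."""
    labels = Counter(lab for _, lab in docs)
    words = {lab: Counter() for lab in labels}
    for tokens, lab in docs:
        words[lab].update(tokens)
    vocab = {w for c in words.values() for w in c}
    model = {}
    for lab in labels:
        total = sum(words[lab].values())
        model[lab] = (
            math.log(labels[lab] / len(docs)),
            {w: math.log((words[lab][w] + 1) / (total + len(vocab))) for w in vocab},
        )
    return model, vocab

def classify(model, vocab, tokens):
    """Pick the label maximizing log-prior + sum of word log-likelihoods."""
    def score(lab):
        prior, likelihood = model[lab]
        return prior + sum(likelihood[w] for w in tokens if w in vocab)
    return max(model, key=score)

train = [
    (["jogo", "lindo", "vitoria"], "pos"),
    (["show", "lindo"], "pos"),
    (["jogo", "horrivel"], "neg"),
    (["arbitro", "horrivel", "ruim"], "neg"),
]
model, vocab = train_nb(train)
print(classify(model, vocab, ["jogo", "lindo"]))  # → pos
```

An SVM baseline would replace the probability model with a margin-based linear classifier over the same bag-of-words vectors, which is where the paper reports the performance gap.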
by Cinthya França, Antonio Augusto Rocha and Pedro Velloso
The purpose of this work is to understand the complex reality of the eBay e-commerce network, its connections and the dynamics of its users. Data were collected using a script developed in this work, resulting in a database of approximately 87 million transactions and 15 million distinct dealer users. From these data, a characterization was made by estimating network metrics, such as the dealer users' degree distribution, which gave us key insights about the eBay negotiation network. We found users who bought from or sold to more than 100,000 different people. We also found that one user interacted over 4,000 times with another in just 3 months. These and other interesting results, such as average distance and feedback ratings, were obtained, analyzed and discussed in this work.
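The degree distribution mentioned above can be computed directly from a transaction list. A minimal sketch, treating each transaction as an undirected buyer–seller edge (the user IDs are illustrative):

```python
from collections import Counter

def degree_distribution(transactions):
    """Count each user's distinct trading partners, then tally how many
    users have each degree."""
    partners = {}
    for buyer, seller in transactions:
        partners.setdefault(buyer, set()).add(seller)
        partners.setdefault(seller, set()).add(buyer)
    return Counter(len(p) for p in partners.values())

edges = [("a", "b"), ("a", "c"), ("a", "b"), ("b", "c"), ("d", "a")]
print(sorted(degree_distribution(edges).items()))  # → [(1, 1), (2, 2), (3, 1)]
```

Note that repeated transactions between the same pair (as in the 4,000-interaction case) do not increase degree; counting multi-edges separately would instead characterize interaction intensity.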
by Rodrigo Laiola Guimarães and Mateus Molinaro Motta
Learning about Web Design can be difficult and time consuming, yet students often do not learn from their errors and struggle to understand some of the differences between document structure, styling, scripting and temporal synchronization. In this paper we present Ambulant Sketchbook, an easy-to-use Web playground designed to enable students to understand and learn from their errors. In particular, this application simplifies the process of learning how to write and debug Web documents by exploring aspects of immediate feedback, coding assistance, direct manipulation and playback control. We deployed and used Ambulant Sketchbook in a Web Design Foundations course over a 2-week span. Based on the positive feedback from a group of post-secondary students, we expect that the functionalities and experiences discussed in this work can yield significant insights for the design of next-generation authoring tools and for the process of teaching Web Media related disciplines.
by Carlos de Castro Lozano and Miguel Angel Rodrigo Alonso
by José Miguel Ramírez Uceda, Remedios María Robles González and Carlos de Castro Lozano
by Leonardo Sabadini Piva, Andressa Bezerra Ferreira, Reinaldo Bezerra Braga and Rossana Maria de Castro Andrade
This paper presents a system for detecting falls, and issuing warnings about them, for people requiring special care. The system evaluates, in real time, data from the accelerometer and magnetometer sensors of mobile devices running the Android operating system, applying algorithms that combine fall patterns, device position and voice recognition to determine whether a fall has occurred. We performed 240 tests with a young healthy user, with a Samsung Galaxy S3 I9300 device strapped to his chest, in order to assess the efficiency of the fall detection.
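A common threshold heuristic for accelerometer-based fall detection — a free-fall dip in total acceleration followed by an impact spike — can be sketched as follows (the thresholds and samples are illustrative, not the paper's tuned values):

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5):
    """Scan accelerometer samples (x, y, z in units of g) for a free-fall
    dip followed shortly by an impact spike."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(magnitudes):
        if m < free_fall_g:
            # look for an impact within the next few samples
            if any(m2 > impact_g for m2 in magnitudes[i + 1 : i + 10]):
                return True
    return False

walking = [(0.0, 0.1, 1.0), (0.1, 0.0, 0.9), (0.0, 0.2, 1.1)]
fall = [(0.0, 0.1, 1.0), (0.1, 0.1, 0.2), (1.8, 1.9, 1.5)]
print(detect_fall(walking), detect_fall(fall))  # → False True
```

The paper's system layers device position and voice recognition on top of such motion patterns to reduce false positives before raising a warning.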
by Javier Zambrano Ferreira, Josiane Rodrigues, David Fernandes and Marco Cristo
The amount of information available on the Internet does not allow manual content analysis to identify information of interest, so automated analyses are used instead; one increasingly important approach is polarity analysis, the classification of a text document as positive, negative, or neutral with respect to a certain topic. This classification is particularly useful in the finance domain, where news about a company can affect the performance of its stocks. Although most methods in the financial domain assume that a whole document is associated with a particular entity, this is not always the case. In fact, it is common for authors to cite several entities in a single document, and with different polarities. Accordingly, the objective of this paper is to study strategies for polarity detection in financial documents with multiple entities. Specifically, we studied methods based on learning multiple models, one for each observed entity, using SVM classifiers. We evaluated models based on partitioning documents into fragments according to the entities they cite, using several heuristics to segment documents based on shallow and deep natural language processing (NLP). We found that entity-specific models created by partitioning the document collection into segments outperformed the strategy based on using entire documents. We also observed that a more complex segmentation using anaphora resolution was not able to outperform a low-cost approach based on simple string matching.
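The low-cost string-matching segmentation the paper found competitive can be sketched minimally: split the text into sentences and assign each sentence to the entities it mentions (the news text and entity names below are made up):

```python
def segment_by_entity(document, entities):
    """Split a document into sentences and assign each to the entities it
    mentions, by case-insensitive substring matching."""
    segments = {e: [] for e in entities}
    for sentence in document.split("."):
        sentence = sentence.strip()
        for entity in entities:
            if entity.lower() in sentence.lower():
                segments[entity].append(sentence)
    return segments

news = ("Petrobras shares fell after the announcement. "
        "Meanwhile, Vale reported record output. "
        "Analysts remain positive about Vale.")
segments = segment_by_entity(news, ["Petrobras", "Vale"])
print(len(segments["Petrobras"]), len(segments["Vale"]))  # → 1 2
```

Each entity's fragments then feed that entity's own SVM polarity classifier; the anaphora-resolution variant would additionally attach sentences like "Its profits rose" to the right entity, at higher NLP cost.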
by Daniel Santos, Angelo Perkusich and Hyggo Almeida
The use of social networks has shown great potential for information diffusion and the formation of public opinion. One key problem that has attracted researchers' interest is Topic-based Influence Maximization, which refers to finding a small set of users in a social network who have the ability to influence a substantial portion of users on a given topic. The proposed solutions, however, are not suitable for large-scale social networks and must incorporate mechanisms for determining the social influence among users on each topic of interest. Consequently, for these approaches it becomes difficult or even unfeasible to deal quickly and efficiently with constant changes in the structure of social networks. This problem is particularly relevant because the topics of interest of users and the social influence they exert on each other for every topic are considered together. In this work we propose a scalable solution that applies data mining over an information propagation log to directly select the initial set of influential users for a particular topic, without the need for a previous step that learns users' social influence with regard to that topic. As an additional benefit, the targeted seed set also offers an approximation guarantee with respect to the optimal solution. Finally, we present a design of experiments over a data set containing information propagation data from a real social network. As the main result, we found evidence that the proposed solution maintains a trade-off between scalability and accuracy.
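The idea of selecting seeds directly from a propagation log can be illustrated with a deliberately crude credit scheme: count, per topic, how often each user appears as the source of a propagation (the log entries are made up, and the paper's actual mining and guarantee are more involved):

```python
from collections import Counter

def top_seeds(log, topic, k=2):
    """Pick the k users who most often appear as the source of propagations
    on `topic` in a (source, target, topic) propagation log."""
    credit = Counter(src for src, _tgt, t in log if t == topic)
    return [user for user, _ in credit.most_common(k)]

log = [
    ("u1", "u2", "sports"), ("u1", "u3", "sports"), ("u2", "u4", "sports"),
    ("u3", "u1", "music"), ("u3", "u2", "music"), ("u1", "u4", "music"),
]
print(top_seeds(log, "sports", k=1))  # → ['u1']
```

The key property this shares with the paper's approach is that no per-edge influence probabilities are learned beforehand: the log itself supplies the evidence of who spreads what.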
by David Cevallos, Fernando Cevallos, Ivan Bernal and David Mejía
by Sebastián Ochoa, Andrés Pillajo, Freddy Acosta and Gonzalo Olmedo
by Roberto Willrich and Adiel Mittmann
Annotation tools have been applied successfully in the educational context for a few years, supporting teachers and students in marking relevant parts of a content item and in associating additional information with it. This paper presents DLNotes2, an e-learning tool that supports the creation of structured and semantic (ontology-based) annotations on HTML documents.
by Edirlei Soares de Lima, Simone Diniz Junqueira Barbosa, Bruno Feijo and Antonio Furtado
KW-GPS is a system to assist users intent on enjoying Web resources related to a domain-restricted collection of stories. In this system, each story is referenced in a virtual library in terms of the following data: (1) the URLs of resources associated with the story, which include but are not limited to plot-summaries, narrative texts, and videos; and (2) keywords of different classes, which serve as a multi-aspect index mechanism. Library items also include story templates, representing narrative motifs. Furthermore, a reduced version of the tool runs the basic rank-and-show process on mobile devices, such as tablets and cell phones.
by Thiago H. Silva, Antonio A. F. Loureiro and Ana Paula G. Ferreira
Currently, the use of location-based social networks is becoming quite popular; Foursquare, for example, reported 50 million users in 2014. Data from this type of system can be viewed as a source of sensing, in which the sensors are users with their mobile devices sharing data on various aspects of the city. This source of data enables large-scale study of urban social behavior and city dynamics. In this paper we show how the signals emitted by Foursquare users can be used to better understand the differences between the behavior of tourists and residents. We analyze tourists and residents in four popular cities around the world: London, New York, Rio de Janeiro and Tokyo. One of the contributions of this work is the spatio-temporal study of the behavioral properties of these two classes of users. We have identified, for example, that some locations have features that are more correlated with tourists' behavior, and also that even in places frequented by both tourists and residents there are clear differences in the behavior patterns of the two classes. Our results could be useful in several cases, for example to help in the development of new recommendation systems specific to tourists.
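Separating the two classes requires some operational rule; a common one (an assumption here, not necessarily the paper's definition) labels a user a tourist in a city when all their check-ins there fall within a short window:

```python
from datetime import date

def classify_user(checkin_dates, resident_span_days=30):
    """Label a user 'tourist' when their check-ins in a city span less than
    `resident_span_days`, and 'resident' otherwise."""
    span = (max(checkin_dates) - min(checkin_dates)).days
    return "resident" if span >= resident_span_days else "tourist"

visitor = [date(2014, 7, 1), date(2014, 7, 3), date(2014, 7, 8)]
local = [date(2014, 1, 5), date(2014, 4, 9), date(2014, 11, 20)]
print(classify_user(visitor), classify_user(local))  # → tourist resident
```

Once users are labeled, the spatio-temporal comparison reduces to contrasting where and when each class checks in.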
18th–21st November 2014