Economic growth, population growth, and the drive for sustainability evidenced by the Paris Accords are forcing a radical re-examination of the way electricity is produced, managed, and consumed. Emerging research on sustainable smart electricity markets is facilitating the emergence of sustainable energy systems and a revolution in the efficiency and reliability of electricity consumption, production, and distribution. Traditional electricity grids and markets are being disrupted by a range of forces, including the rise of weather-dependent and geographically distributed supply from renewable sources, consumer involvement in managing their power consumption and small-scale production, and the electrification of transport, evidenced by the emergence of electric vehicles. We expect these transformations to bring about increasingly complex and dynamic smart electricity markets that must rely on intelligent analysis of information to continuously inform stakeholders' decisions, and on effective integration of stakeholders' actions. We outline how advances in information-intensive processes are fundamental for facilitating these transformations. We describe the roles that such processes will play in the future smart grid and discuss the Information Systems research challenges that must be met to achieve these goals. The research we discuss spans challenges in public policy, privacy and security, market mechanisms, and data-driven decision support. Overall, research is necessary to enable information sharing across the grid as well as to develop methods that intelligently exploit rich data to transform the efficiency of energy use, production, and distribution. Finally, our commentary underscores that the diverse IS research perspective is instrumental for addressing the complexity and interdisciplinary nature of this research.
Researchers from across the social and computer sciences are increasingly using machine learning to study and address global development challenges. This paper examines the burgeoning field of machine learning for the developing world (ML4D). First, we present a review of prominent literature. Next, we suggest best practices drawn from the literature for ensuring that ML4D projects are relevant to the advancement of development objectives. Finally, we discuss how developing world challenges can motivate the design of novel machine learning methodologies. This paper provides insights into systematic differences between ML4D and more traditional machine learning applications. It also discusses how technical complications of ML4D can be treated as novel research questions, how ML4D can motivate new research directions, and where machine learning can be most useful.
Software quality concerns in the banking industry are often addressed by professionals but rarely studied academically. This paper aims to gain deep insight into the concerns most strongly perceived by industry experts. We carried out a mixed-method, Delphi-like study of the Italian banking IT sector. In line with our pragmatic epistemological paradigm, we developed a specific research framework to pursue this vertical study, which is domain- and country-specific. Data collection proceeded in four phases, starting with a high-level, randomly stratified panel of 13 senior managers and continuing with a target panel of 124 carefully selected and well-informed domain experts. We identified 28 concerns about the present situation and discussed them within a framework inspired by the ISO/IEC 25010, 42010, and 12207 standards. Moreover, the concerns disclose how a short-term total-cost-of-ownership view of information systems increases technical debt. After mapping the concerns onto the ISO/IEC standards for software quality, architecture, and process, we discussed these dimensions in relation to the three standards. Our results show the strong relationship between software quality, software architecture, and software process. On this basis, we derive and illustrate the novel SQuAP (Software Quality, Architecture, Process) meta-model framework for analyzing and assessing information systems' quality.
Person-job fit is the process of matching the right talent to the right job by identifying the talent competencies required for the job. While many qualitative efforts have been made in related fields, quantitative ways of measuring talent competencies, as well as a job's talent requirements, are still lacking. To this end, in this paper, we propose a novel end-to-end data-driven model based on a Convolutional Neural Network (CNN), namely the Person-Job Fit Neural Network (PJFNN), for matching a candidate's qualifications to the requirements of a job. Specifically, PJFNN is a bipartite neural network that projects both job postings and candidates' resumes onto a shared latent representation, so it can effectively learn the joint representation of Person-Job fitness from successful job applications. In particular, owing to its hierarchical representation structure, PJFNN can not only estimate whether a candidate fits a job, but also identify which specific requirement items in the job posting are satisfied by the candidate, by measuring the distances between the corresponding latent representations. Finally, we evaluate our approach on a large-scale real-world dataset. Extensive experiments clearly validate the performance of our method in terms of Person-Job Fit prediction. We also provide effective data visualizations to show job and talent benchmark insights obtained by PJFNN.
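The matching idea described above, projecting both sides of a bipartite pair into a shared latent space and scoring fit by representation distance, can be sketched as follows. This is a deliberately simplified illustration, not the paper's architecture: PJFNN uses trained CNN encoders over word sequences, whereas here both sides are bag-of-words vectors passed through a single shared random linear projection, and all names (`encode`, `fit_score`, `W`) are hypothetical.

```python
import numpy as np

# Minimal sketch of a bipartite "two-tower" matcher in the spirit of
# PJFNN: a job posting and a resume (as bag-of-words vectors) are each
# projected into a shared latent space, and Person-Job fit is scored by
# the cosine similarity of the two latent representations.
# The shared random projection W stands in for the paper's trained CNN
# encoders -- a simplifying assumption for illustration only.

rng = np.random.default_rng(0)
VOCAB, LATENT = 50, 8
W = rng.normal(size=(VOCAB, LATENT))  # shared projection (assumption)

def encode(bow):
    """Project a bag-of-words vector into the shared latent space,
    normalized to (near) unit length."""
    z = bow @ W
    return z / (np.linalg.norm(z) + 1e-9)

def fit_score(job_bow, resume_bow):
    """Cosine similarity of the latent representations; higher = better fit."""
    return float(encode(job_bow) @ encode(resume_bow))

# Toy vectors: a resume sharing the posting's terms vs. a disjoint one.
job = np.zeros(VOCAB); job[[1, 2, 3]] = 1.0
matching = np.zeros(VOCAB); matching[[1, 2, 3]] = 1.0
unrelated = np.zeros(VOCAB); unrelated[[40, 41]] = 1.0
```

In the same spirit, the paper's per-requirement diagnosis would correspond to computing such distances between the latent representation of each individual requirement item and the resume, rather than a single whole-document score.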
Serendipity has been recognized as having the potential to enhance unexpected information discovery. This study shows that decomposing the concept of serendipity into unexpectedness and interest is a useful way of implementing it. Experts' domain knowledge helps provide serendipitous recommendations, which can be further improved by adaptively incorporating users' real-time feedback. This research also conducts an empirical user study to analyze the influence of serendipity in a health news delivery context. A personalized filtering system named MedSDFilter was developed, on top of which serendipitous recommendation was implemented using three approaches: random, static knowledge-based, and adaptive knowledge-based models. Comparing the three models, the results indicate that the adaptive knowledge-based method is the most effective at helping people discover unexpected and interesting content. The insights of this research should prompt researchers and practitioners to rethink the way search engines and recommender systems operate in order to address the challenges of unexpected and valuable information discovery. The outcome has implications for giving ordinary people greater chances of encountering beneficial information.
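The decomposition described above can be sketched as a simple scoring rule that blends the two components. This is an illustrative assumption, not MedSDFilter's actual scoring: the function names, the cosine-based measure of unexpectedness, and the linear blend with weight `alpha` are all hypothetical choices made for the sketch.

```python
import numpy as np

# Illustrative sketch: serendipity decomposed into unexpectedness
# (dissimilarity between an item and the user's profile) and interest
# (predicted relevance), combined by a linear blend. All modeling
# choices here are assumptions for illustration.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def serendipity_score(item_vec, profile_vec, predicted_interest, alpha=0.5):
    """Blend unexpectedness (in [0, 1]) with predicted interest (in [0, 1])."""
    unexpectedness = 1.0 - max(0.0, cosine(item_vec, profile_vec))
    return alpha * unexpectedness + (1.0 - alpha) * predicted_interest

# A novel-but-interesting item should outrank an equally interesting
# item that looks just like what the user already reads.
profile = np.array([1.0, 0.0])    # user's reading history
familiar = np.array([1.0, 0.0])   # mirrors the profile
novel = np.array([0.0, 1.0])      # orthogonal to the profile
```

Under this sketch, the adaptive knowledge-based variant would correspond to updating the profile vector (and possibly `alpha`) from users' real-time feedback, while the static variant would keep them fixed.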