The object of this project is nature and the “natural” understood as the non-human, including non-human sentient beings and the plant and mineral kingdoms. The main goal is to extend our understanding of the economy of emotions in the digital communication ecosystem to the field of nature and the “natural”, which figure in several public discourses (e.g. ecologism, animal rights, green activism, veganism, etc.). Placing the visually grotesque (i.e. disruptive, shocking) and kitschy (i.e.
Over 90% of clinical trials for cancer drugs fail. It is therefore necessary to improve our understanding of the factors that increase the success of drug development. In the present thesis, this issue is addressed from the perspective of Innovation Studies. To this end, 103 articles related to clinical trials, published in innovation journals (1984–2021), are systematically reviewed. The existing findings are summarised, the studies are classified into categories, and some suggestions for potential theoretical and methodological advances in Innovation Studies are provided.
In this talk, I will explore the development of DNA sequencing as a scientific practice from the mid-1980s onwards. By combining qualitative and quantitative methods, I will show that this practice was organised in a variety of ways and that this variety both extends and qualifies the epic history that the proponents of the Human Genome Project mobilised. One of the points of divergence between our stories is that, in my investigation, the sequencing of human DNA was often connected to medical problems.
This seminar is conceived as a reflection, from the perspective of the history of science, on the different ways of understanding the relationship and interaction between science and literature, two fields that are often considered opposed or antagonistic.
The academic debate about university-industry engagement often centres on the strategic aspects of these interactions, particularly the benefits associated with knowledge exchange and learning. More broadly, these interactions are assumed to be fundamental to improving science and innovation. However, the core of the innovation system lies in the researcher, specifically in their internal motivation to engage with companies, which can determine the success of knowledge transfer and its outputs.
Artificial intelligence (AI) can make important contributions to scientific research by performing functional tasks such as reviewing prior literature, classifying digital data, or developing new drug compounds. There is less evidence, however, on the potential of AI as a mechanism to manage human workers who perform such research tasks.
Science and Technology (S&T) is a key aspect of superhero comic books. Comics reach a vast audience and are rife with scientific references, making them a valuable resource for communicating the value of science in popular culture. The Marvel universe has grown dramatically since its birth in 1939, breaking into the cinema industry and reaching new audiences. However, a glance at some popular Marvel characters raises concerns about the part played by S&T in superhero stories and the debatable effects of S&T on superhero characters.
In this seminar, I will present a summary of my findings on the structure of innovation and collaborative knowledge, based on two primary datasets: Wikipedia and scientific papers. The presentation consists of three distinct sections: 1) understanding the oligopoly of super-editors in collective knowledge [1, 2], 2) locating the silk road of knowledge transfer (or diffusion) in the 21st century using Wikipedia [3, 4], and 3) quantifying team chemistry in scientific collaborations of duos [5].
This talk will have two main parts: 1) risks in AI systems; 2) AI for online safety. In the first part, I will discuss three risks of using AI: security, privacy, and discrimination. I will show that attacks can be performed to exploit AI models and the systems that use them, that AI-based systems can be privacy-intrusive, and that AI-based systems may have biases that lead to discrimination against particular types of users (e.g. based on gender or ethnicity). I will then outline our current research and projects on making AI safer.
David Barbera-Tomás¹, James Bates², Enrique Meseguer¹ and Michael M. Hopkins²
¹ Ingenio (CSIC-UPV), Universitat Politècnica de València, Spain.
² Science Policy Research Unit (SPRU), University of Sussex Business School, Brighton, UK.