The Anh Han
1–20 of 20 results
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 82 (July 22–26, 2024). doi:10.1162/isal_a_00822
Abstract
Navigating the intricacies of digital environments demands effective strategies for fostering cooperation and upholding norms. Banning and moderating the content of influential users on social media platforms is often met with shock and awe, yet clearly has profound implications for online behaviour and digital governance. Drawing inspiration from these actions, we model broadcasting retributive measures. Leveraging analytical modeling and extensive agent-based simulations, we investigate how signaling punitive actions can deter anti-social behaviour and promote cooperation in online and multi-agent systems. Our findings underscore the transformative potential of threat signaling in cultivating a culture of compliance and bolstering social welfare, even in challenging scenarios with high costs or complex networks of interaction. This research offers valuable insights into the mechanisms of promoting pro-social behaviour and ensuring behavioural compliance across diverse digital ecosystems.
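A minimal illustrative sketch of the kind of agent-based model described above, under our own simplifying assumptions (a well-mixed donation game, a fixed detection probability, Fermi imitation, and an ad hoc belief-update rule), not the paper's implementation: broadcasting a sanction makes the punishment risk visible to all agents rather than only to the punished one.

```python
import random, math

N, BENEFIT, COST, CATCH, FINE, BETA, ROUNDS = 200, 4.0, 1.0, 0.3, 6.0, 1.0, 1000

def run(broadcast: bool) -> float:
    coop = [random.random() < 0.5 for _ in range(N)]
    perceived_risk = [0.0] * N              # each agent's belief that defection gets punished
    for _ in range(ROUNDS):
        payoff = [0.0] * N
        for i in range(N):
            j = random.randrange(N)         # random partner (well-mixed population)
            if coop[i]:
                payoff[i] -= COST
                payoff[j] += BENEFIT
            elif random.random() < CATCH:   # defector caught and fined by the institution
                payoff[i] -= FINE
                observers = range(N) if broadcast else (i,)
                for k in observers:         # broadcasting makes the sanction visible to all
                    perceived_risk[k] = min(1.0, perceived_risk[k] + 0.05)
        # pairwise-comparison (Fermi) imitation of a random model agent
        i, j = random.randrange(N), random.randrange(N)
        if random.random() < 1.0 / (1.0 + math.exp(-BETA * (payoff[j] - payoff[i]))):
            coop[i] = coop[j]
        # defectors who believe the punishment risk is high switch to cooperation
        for k in range(N):
            if not coop[k] and random.random() < perceived_risk[k] * CATCH:
                coop[k] = True
    return sum(coop) / N

print("cooperation without broadcasting:", run(False))
print("cooperation with broadcasting   :", run(True))
```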
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 44 (July 22–26, 2024). doi:10.1162/isal_a_00767
Proceedings Papers
isal2024, ALIFE 2024: Proceedings of the 2024 Artificial Life Conference, 34 (July 22–26, 2024). doi:10.1162/isal_a_00752
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 122 (July 24–28, 2023). doi:10.1162/isal_a_00626
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 120 (July 24–28, 2023). doi:10.1162/isal_a_00571
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 103 (July 24–28, 2023). doi:10.1162/isal_a_00637
Abstract
Intention recognition entails the process of becoming aware of another agent’s intention by inferring it through its actions and their effects on the environment. It allows agents to prevail when interacting with others in both cooperative and hostile environments. One of the main challenges in intention recognition is generating and collecting large amounts of data, and then being able to infer and recognise strategies. To this end, in the context of repeated interactions, we generate diverse datasets, characterised by various noise levels and complexities. We propose an approach using different popular machine learning methods to classify strategies represented by sequences of actions in the presence of noise. Experiments were conducted by varying the noise level and the number of generated strategies in the input data. Results show that the adopted methods are able to recognise strategies with high accuracy. Our findings and approach open up a novel research direction, consisting of combining machine learning and game theory to generate large and complex datasets and make inferences. This can allow us to explore and quantify human behaviours based on data-driven and generative models.
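An illustrative sketch of this kind of pipeline, not the paper's exact setup: the strategy set (AllC, AllD, Tit-for-Tat, Win-Stay-Lose-Shift), the action-flip noise model and the off-the-shelf classifier below are our assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T, NOISE = 20, 0.1                        # rounds per interaction, action-flip probability

def play(strategy: str) -> np.ndarray:
    opp = rng.integers(0, 2, T)           # random opponent: 1 = cooperate, 0 = defect
    acts = np.zeros(T, dtype=int)
    for t in range(T):
        if strategy == "AllC":
            acts[t] = 1
        elif strategy == "AllD":
            acts[t] = 0
        elif strategy == "TFT":           # Tit-for-Tat: start with C, then copy the opponent
            acts[t] = 1 if t == 0 else opp[t - 1]
        else:                             # WSLS: cooperate iff both players matched last round
            acts[t] = 1 if t == 0 else int(acts[t - 1] == opp[t - 1])
    noisy = np.where(rng.random(T) < NOISE, 1 - acts, acts)   # implementation noise
    return np.concatenate([noisy, opp])   # features: focal actions plus opponent actions

LABELS = ["AllC", "AllD", "TFT", "WSLS"]
X = np.array([play(s) for s in LABELS for _ in range(500)])
y = np.repeat(np.arange(len(LABELS)), 500)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```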
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 7 (July 18–22, 2022). doi:10.1162/isal_a_00484
Abstract
With the introduction of Artificial Intelligence (AI) and related technologies in our daily lives, fear and anxiety about their misuse, as well as the hidden biases in their creation, have led to a demand for regulation to address such issues. Yet, blindly regulating an innovation process that is not well understood may stifle this process and reduce benefits that society might gain from the generated technology, even under the best of intentions. Starting from a baseline game-theoretical model that captures the complex ecology of choices associated with a race for domain supremacy using AI technology, we show that socially unwanted outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e., potentially unsafe behaviours. As an alternative that resolves the detrimental effect of over-regulation, we propose a voluntary commitment approach, wherein technologists have the freedom of choice between independently pursuing their course of actions or establishing binding agreements to act safely, with sanctioning of those who do not abide by what they have pledged. Overall, our work reveals for the first time how voluntary commitments, with sanctions either by peers or by an institution, lead to socially beneficial outcomes in all scenarios that can be envisaged in the short-term race towards domain supremacy through AI technology.
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 41 (July 18–22, 2022). doi:10.1162/isal_a_00524
Abstract
Before embarking on a new collective venture, it is important to understand partners’ preferences and intentions and how strongly they commit to a common goal. Arranging prior commitments of future actions has been shown to be an evolutionarily viable strategy in the context of social dilemmas. Previous works have focused on simple well-mixed population settings, for ease of analysis. Here, starting from a baseline model of a coordination game with asymmetric benefits for technology adoption in the well-mixed setting, we examine the impact of different population structures, including square lattice and scale-free (SF) networks, capturing typical homogeneous and heterogeneous network structures, on the dynamics of decision-making in the context of coordinating technology adoption. We show that, similarly to previous well-mixed analyses, prior commitments enhance coordination and the overall population payoff in structured populations, especially when the cost of commitment is justified against the benefit of coordination and when the technology market is highly competitive. When commitments are absent, slightly higher levels of coordination and population welfare are obtained in SF networks than in the lattice. In the presence of commitments and when the market is very competitive, the overall population welfare is similar in both lattice and heterogeneous networks; it is slightly lower in SF networks when market competition is low, while social welfare suffers in a monopolistic setting. Overall, we observe that commitments can improve coordination and population welfare in structured populations, but in their presence, the outcome of evolutionary dynamics is, interestingly, not sensitive to changes in the network structure.
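A sketch of the baseline (no-commitment) structured-population setting described above, using our own construction rather than the paper's code: agents on a periodic square lattice or a Barabási–Albert scale-free network play a two-technology coordination game with asymmetric (placeholder) payoffs and imitate their best-performing neighbour.

```python
import random
import networkx as nx

def make_graph(kind: str, n: int = 400) -> nx.Graph:
    if kind == "lattice":                                      # homogeneous structure
        side = int(n ** 0.5)
        return nx.convert_node_labels_to_integers(
            nx.grid_2d_graph(side, side, periodic=True))
    return nx.barabasi_albert_graph(n, m=2)                    # heterogeneous (scale-free)

# asymmetric coordination payoffs: (my technology, neighbour's technology) -> payoff
PAYOFF = {("A", "A"): 4.0, ("B", "B"): 3.0, ("A", "B"): 0.0, ("B", "A"): 0.0}

def simulate(kind: str, steps: int = 5000) -> float:
    g = make_graph(kind)
    tech = {v: random.choice("AB") for v in g}
    for _ in range(steps):
        v = random.choice(list(g))
        def fitness(x):                                        # payoff accumulated over neighbours
            return sum(PAYOFF[(tech[x], tech[u])] for u in g[x])
        best = max(list(g[v]) + [v], key=fitness)              # imitate the best-performing neighbour
        tech[v] = tech[best]
    return sum(t == "A" for t in tech.values()) / len(tech)

print("lattice    - share adopting technology A:", simulate("lattice"))
print("scale-free - share adopting technology A:", simulate("sf"))
```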
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 40 (July 18–22, 2022). doi:10.1162/isal_a_00523
Abstract
Indirect reciprocity (IR) is an important mechanism for promoting cooperation among self-interested agents. Simplified, it means: “you help me, therefore somebody else will help you” (in contrast to direct reciprocity: “you help me, therefore I will help you”). IR can be achieved via reputation and norms. However, it has often been argued that IR only works if reputations are public, and that it breaks down under private assessment (PriA). Yet, recent papers suggest that IR under PriA is feasible, and that it offers more variety and more ways to improve than had been considered before.
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 45 (July 18–22, 2022). doi:10.1162/isal_a_00528
Abstract
Regulating the development of advanced technology such as Artificial Intelligence (AI) has become a principal topic, given the potential threat such technologies pose to humanity’s long-term future. Being the first to deploy such technology promises innumerable benefits, which might lead to the disregard of safety precautions or societal consequences in favour of speedy development, engendering a race narrative among firms and stakeholders due to value erosion. Building upon a previously proposed game-theoretical model describing an idealised technology race, we investigated how various structures of interaction among race participants can alter collective choices and the requirements for regulatory actions. Our findings indicate that strong diversity among race participants, both in terms of connections and peer influence, can reduce the conflicts which arise in purely homogeneous settings, thereby lessening the need for regulation.
Proceedings Papers
isal2022, ALIFE 2022: The 2022 Conference on Artificial Life, 35 (July 18–22, 2022). doi:10.1162/isal_a_00517
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 65 (July 18–22, 2021). doi:10.1162/isal_a_00385
Abstract
We examine a social dilemma that arises with the advancement of technologies such as AI, where technologists can choose a safe (SAFE) vs risk-taking (UNSAFE) course of development. SAFE is costlier and takes more time to implement than UNSAFE, allowing UNSAFE strategists to further claim significant benefits from reaching supremacy in a certain technology. Collectively, SAFE is the preferred choice when the risk is sufficiently high, while risk-taking is preferred otherwise. Given the advantage of risk-taking behaviour in terms of cost and speed, a social dilemma arises when the risk is not high enough to make SAFE the preferred individual choice, enabling UNSAFE to prevail when it is not collectively preferred (leading to lower social welfare). We show that the range of risk probabilities where the social dilemma arises depends on many factors; the most important among them are the time-scale to reach supremacy in a given domain (i.e. short-term vs long-term AI) and the speed gain from ignoring safety measures. Moreover, given the more complex nature of this scenario, we show that incentives such as reward and punishment (for example, for the purpose of technology regulation) are much more challenging to supply correctly than in the case of cooperation dilemmas such as the Prisoner's Dilemma and the Public Goods Game.
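A back-of-the-envelope sketch of where such a dilemma zone appears, using our own simplified payoffs rather than the paper's model: UNSAFE developers skip a safety cost c and gain a speed factor s, but the technology fails with probability p and imposes an externality on others; SAFE developers pay c and move at unit speed.

```python
def unsafe_payoff(p: float, B: float = 10.0, s: float = 1.5) -> float:
    return (1 - p) * s * B            # expected prize, discounted by the disaster risk p

def safe_payoff(B: float = 10.0, c: float = 3.0) -> float:
    return B - c                      # smaller but certain prize after paying the safety cost

EXTERNALITY = 8.0                     # harm a failed technology imposes on everyone else
print(" p   individual best   collective best")
for p in [i / 10 for i in range(11)]:
    individual = "UNSAFE" if unsafe_payoff(p) > safe_payoff() else "SAFE"
    collective = "UNSAFE" if unsafe_payoff(p) - p * EXTERNALITY > safe_payoff() else "SAFE"
    print(f"{p:3.1f}   {individual:>15}   {collective:>15}")
# rows where the two columns disagree mark the risk range with a social dilemma
```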
Proceedings Papers
isal2021, ALIFE 2021: The 2021 Conference on Artificial Life, 84 (July 18–22, 2021). doi:10.1162/isal_a_00417
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 248–250 (July 13–18, 2020). doi:10.1162/isal_a_00281
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 402–410 (July 13–18, 2020). doi:10.1162/isal_a_00292
Abstract
Indirect reciprocity is an important mechanism for promoting cooperation among self-interested agents. Simplified, it means: “you help me, therefore somebody else will help you” (in contrast to direct reciprocity: “you help me, therefore I will help you”). Indirect reciprocity can be achieved via reputation and norms. Strategies relying on these principles can maintain high levels of cooperation and remain stable against invasion, even in the presence of errors. However, this is only the case if the reputation of an agent is modelled as a shared public opinion. If agents have private opinions and hence can disagree about whether somebody is good or bad, even rare errors can cause cooperation to break apart. This paper examines a novel approach to overcome this private information problem, where agents act in accordance with others’ expectations of their behaviour (i.e. pleasing them) instead of being guided by their own, private assessment. As such, a pleasing agent can achieve a better reputation than previously considered strategies when there is disagreement in the population. Our analysis shows that pleasing significantly improves stability as well as cooperativeness. It is effective even if only the opinions of a few other individuals are considered and when it bears additional costs.
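A toy sketch of private assessment, under our own simplifications rather than the paper's full model: every agent keeps a private opinion of everyone else; observers assign a good image to donors who help and a bad image otherwise, but each observation is misperceived with probability ERR. A donor relying on its own opinion follows its private view of the recipient; a "pleasing" donor instead follows the majority opinion held by a small sample of observers, i.e. it does what others expect of it.

```python
import random

N, ERR, ROUNDS, SAMPLE = 50, 0.05, 20000, 5
opinion = [[1] * N for _ in range(N)]          # opinion[i][j]: i's view of j (1 good, 0 bad)

def donate(donor: int, recipient: int, pleasing: bool) -> int:
    if pleasing:
        observers = random.sample([k for k in range(N) if k != donor], SAMPLE)
        expect_help = sum(opinion[k][recipient] for k in observers) > SAMPLE / 2
        return int(expect_help)
    return opinion[donor][recipient]

def run(pleasing: bool) -> float:
    global opinion
    opinion = [[1] * N for _ in range(N)]
    helped = 0
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(N), 2)
        act = donate(donor, recipient, pleasing)
        helped += act
        for k in range(N):                     # everyone privately updates its view of the donor
            perceived = act if random.random() > ERR else 1 - act
            opinion[k][donor] = perceived      # helping -> good image, refusing -> bad image
    return helped / ROUNDS

print("helping rate, own opinion:", run(False))
print("helping rate, pleasing   :", run(True))
```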
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 331–332 (July 29–August 2, 2019). doi:10.1162/isal_a_00183
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 163–170 (July 29–August 2, 2019). doi:10.1162/isal_a_00157
Abstract
When starting a new collective venture, it is important to understand partners’ motives and how strongly they commit to common goals. Arranging prior commitments or agreements on how to behave has been shown to be an evolutionarily viable strategy in the context of cooperation dilemmas, ensuring high levels of mutual cooperation among self-interested individuals. However, in many situations, commitments can be used to achieve other types of collective behaviours such as coordination. Coordination is arguably more complex to achieve, since there might be multiple desirable collective outcomes in a coordination problem (compared to mutual cooperation, the only desirable outcome in cooperation dilemmas), especially when these outcomes entail asymmetric benefits or payoffs for those involved. Using methods from Evolutionary Game Theory (EGT), herein we study how prior commitments can be adopted as a tool for enhancing coordination when its outcomes exhibit an asymmetric payoff structure. Our results, obtained both analytically and by numerical simulations, show that whether prior commitments are a viable evolutionary mechanism for enhancing coordination depends strongly on the collective benefit of coordination and, more importantly, on how asymmetric benefits are resolved in a commitment deal.
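A generic sketch of the standard finite-population EGT calculation behind analyses of this kind, with placeholder payoffs rather than the paper's commitment-augmented game: the fixation probability of a strategy A invading a resident strategy B under the pairwise-comparison (Fermi) rule, the quantity used to rank strategies in the small-mutation limit.

```python
import math

Z, BETA = 100, 0.1                       # population size, selection intensity
a, b, c, d = 3.0, 1.0, 0.0, 2.0          # payoff matrix [[A vs A, A vs B], [B vs A, B vs B]]

def payoffs(k: int):
    """Average payoffs when k agents play A and Z-k play B (no self-interaction)."""
    pa = (a * (k - 1) + b * (Z - k)) / (Z - 1)
    pb = (c * k + d * (Z - k - 1)) / (Z - 1)
    return pa, pb

def fixation_prob_A() -> float:
    """Probability that a single A mutant takes over a resident B population."""
    total, prod = 1.0, 1.0
    for k in range(1, Z):
        pa, pb = payoffs(k)
        prod *= math.exp(-BETA * (pa - pb))   # ratio of backward/forward transitions (Fermi rule)
        total += prod
    return 1.0 / total

print("fixation probability of A:", fixation_prob_A(), "(neutral baseline:", 1 / Z, ")")
```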
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 316–323 (July 29–August 2, 2019). doi:10.1162/isal_a_00181
Abstract
The design of mechanisms that encourage pro-social behaviours in populations of self-regarding agents is recognised as a major theoretical challenge within several areas of social, life and engineering sciences. When interference from external parties is considered, several heuristics have been identified as capable of engineering a desired collective behaviour at a minimal cost. However, these studies neglect the diverse nature of contexts and social structures that characterise real-world populations. Here we analyse the impact of diversity by means of scale-free interaction networks with high and low levels of clustering, and test various interference mechanisms using simulations of agents facing a cooperative dilemma. Our results show that interference on scale-free networks is not trivial and that distinct levels of clustering react differently to each interference mechanism. As such, we argue that no single tailored response fits all scale-free networks, and we identify which mechanisms are more efficient at fostering cooperation in each type of network. Finally, we discuss the pitfalls of considering reckless interference mechanisms.
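A sketch of this interference setting, with our own simplified heuristic and parameters rather than the paper's mechanisms: an external party pays a fixed amount to cooperators whose neighbourhood contains too few cooperators, on scale-free graphs with low clustering (Barabási–Albert) versus high clustering (powerlaw-cluster), and we track both the cooperation level and the money spent.

```python
import random
import networkx as nx

def run(g: nx.Graph, budget_per_step: float = 1.0, steps: int = 2000):
    coop = {v: random.random() < 0.5 for v in g}
    b, c, spent = 2.0, 1.0, 0.0                         # PD benefit, cost, money spent
    for _ in range(steps):
        payoff = {}
        for v in g:
            p = sum(b * coop[u] for u in g[v]) - c * len(g[v]) * coop[v]
            # interference heuristic: support cooperators in mostly-defecting neighbourhoods
            if coop[v] and sum(coop[u] for u in g[v]) < len(g[v]) / 2:
                p += budget_per_step
                spent += budget_per_step
            payoff[v] = p
        v = random.choice(list(g))                      # asynchronous imitation of a random neighbour
        u = random.choice(list(g[v]))
        if payoff[u] > payoff[v]:
            coop[v] = coop[u]
    return sum(coop.values()) / len(g), spent

low_cluster = nx.barabasi_albert_graph(500, 2, seed=1)
high_cluster = nx.powerlaw_cluster_graph(500, 2, 0.9, seed=1)
for name, g in [("low clustering", low_cluster), ("high clustering", high_cluster)]:
    frac, cost = run(g)
    print(f"{name}: cooperation {frac:.2f}, interference cost {cost:.0f}")
```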
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 143–144 (July 29–August 2, 2019). doi:10.1162/isal_a_00153
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 135–142 (July 29–August 2, 2019). doi:10.1162/isal_a_00152
Abstract
Spending by the UK’s National Health Service (NHS) on independent healthcare treatment has increased in recent years and is predicted to sustain its upward trend as the population grows. Some have viewed this increase as an attempt not to expand patients’ choices but to privatise public healthcare. This debate poses a social dilemma: should the NHS stop cooperating with private providers? This paper contributes to healthcare economic modelling by investigating the evolution of cooperation among three proposed populations: Public Healthcare Providers, Private Healthcare Providers and Patients. The Patient population is included as a main player in the decision-making process by expanding patients’ choices of treatment. We develop a generic basic model that measures the cost of healthcare provision based on given parameters, such as the NHS and private healthcare providers’ costs of investment in both sectors, costs of treatment and gained benefits. Costly punishment by patients is introduced as a mechanism to enhance cooperation among the three populations. Our findings show that cooperation can be improved with the introduction of patients’ punishment against defecting providers. Although punishment increases cooperation, it is very costly given the small improvement in cooperation compared to the basic model.
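A rough numerical sketch of the incentive trade-off described above, with illustrative parameters rather than the paper's calibrated model: a provider compares the gain from defecting (cutting investment) against the expected fine from patient punishment, while each act of punishment costs the patients, which is why deterrence can come at a high price relative to the gain in cooperation.

```python
def provider_payoff(defect: bool, punish_prob: float, gain: float = 5.0, fine: float = 8.0) -> float:
    # expected payoff of a provider: defecting yields a gain but risks a patient-imposed fine
    return gain - punish_prob * fine if defect else 0.0

def patient_cost(punish_prob: float, cost_per_punishment: float = 1.0) -> float:
    # expected cost borne by a punishing patient
    return punish_prob * cost_per_punishment

for q in [0.0, 0.25, 0.5, 0.75, 1.0]:
    deterred = provider_payoff(True, q) <= provider_payoff(False, q)
    print(f"punishment prob {q:.2f}: defection deterred={deterred}, patient cost={patient_cost(q):.2f}")
```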