COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
“Computational modelling for studying brain and behaviour”

Chris Christodoulou is a Professor at the Department of Computer Science, University of Cyprus, which he joined in 2005. He received a B.Eng. degree in electronic engineering from Queen Mary and Westfield College, University of London, in 1991, a Ph.D. degree in neural networks/computational neuroscience from King’s College, University of London, in 1997, and a B.A. degree in German from Birkbeck College, University of London, in 2008. He worked as a Postgraduate Research Assistant (1991–1995) and a Postdoctoral Research Associate (1995–1997) at the Centre for Neural Networks, King’s College, University of London, and taught as a Lecturer (Assistant Professor) at Birkbeck College, University of London (1997–2005). He was a Visiting Research Fellow at King’s College (1997–2001) and, since 2005, has been a Visiting Research Fellow at Birkbeck College. His current research interests include computational neuroscience, neural networks, and machine learning. On these topics, he has authored more than 30 papers in major journals and more than 70 papers in full conference proceedings and edited books. He has also guest-edited six journal special issues (four in BioSystems and two in Brain Research) on the topic of neural coding, successfully coordinated a number of funded research projects, and supervised 7 postdoctoral researchers as well as 5 PhD students who successfully completed their PhDs.
Professor Chris Christodoulou, Department of Computer Science, University of Cyprus
Reinforcement Learning and Neural Network applications

In the area of Reinforcement Learning (RL) we have been working on various topics, our main contributions being: (a) the investigation of multi-agent RL in the iterated prisoner’s dilemma (IPD) with spiking and non-spiking agents, exploring and identifying the conditions that enhance its cooperative outcome and showing that spiking and non-spiking agents behave similarly (a minimal sketch of such a multi-agent IPD setting is given after this section); (b) the design of non-linear local RL rules through neuroevolution and the demonstration that they are effective in both partially observable and nonstationary environments; and (c) the achievement of behaviour plasticity through the modulation of switch neurons, investigated on arbitrary, nonstationary binary association problems and on partially observable T-maze domains with an arbitrary number of turning points.

We have also been applying novel Neural Network (NN) techniques to a number of problems, including: (i) protein secondary structure prediction (PSSP), for which we developed novel ways of training bidirectional recurrent NNs, including the scaled conjugate gradient second-order learning algorithm; we also applied convolutional NNs, the Hessian-free optimization (HFO) algorithm and embeddings from language models as input to second-order deep learning, and compared filtering approaches for the problem; (ii) detection of Alzheimer’s disease in brain MRI, where we applied convolutional NNs with HFO and accelerated training compared to first-order algorithms such as Adam or stochastic gradient descent; (iii) automatic landmark location for the analysis of cardiac MRI images, using NNs to predict the positions of all required landmarks by learning the geometry of a small number of manually located landmarks; (iv) face recognition; (v) speaker identification for security systems; (vi) earthquake prediction using either time-series magnitude data or seismic electric signals; and (vii) detection of fraud committed by firms, to assist auditors in planning an audit.
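As a rough, self-contained illustration of the multi-agent IPD setting of contribution (a), the sketch below pairs two independent tabular Q-learning agents on a memory-one state (the last joint action). The payoff values, the state encoding and all hyperparameters are illustrative assumptions; they are not the spiking or non-spiking agents, nor the settings, of the published studies.

```python
# Minimal sketch: two independent Q-learning agents playing the iterated
# prisoner's dilemma (IPD). Payoffs, state encoding and hyperparameters are
# illustrative assumptions only, not those of the published work.
import random

C, D = 0, 1                                  # actions: cooperate, defect
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5),    # standard (R, S, T, P) = (3, 0, 5, 1)
          (D, C): (5, 0), (D, D): (1, 1)}

class QAgent:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = {}                          # state -> [Q(cooperate), Q(defect)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice((C, D))
        qs = self.q.setdefault(state, [0.0, 0.0])
        return C if qs[C] >= qs[D] else D

    def learn(self, state, action, reward, next_state):
        qs = self.q.setdefault(state, [0.0, 0.0])
        nxt = self.q.setdefault(next_state, [0.0, 0.0])
        qs[action] += self.alpha * (reward + self.gamma * max(nxt) - qs[action])

def run(rounds=20000):
    a, b = QAgent(), QAgent()
    state = (C, C)                           # memory-one state: last joint action
    mutual_coop = 0
    for _ in range(rounds):
        ax, bx = a.act(state), b.act(state[::-1])   # each agent sees its own view of the state
        ra, rb = PAYOFF[(ax, bx)]
        next_state = (ax, bx)
        a.learn(state, ax, ra, next_state)
        b.learn(state[::-1], bx, rb, next_state[::-1])
        state = next_state
        mutual_coop += (ax == C and bx == C)
    return mutual_coop / rounds              # fraction of mutually cooperative rounds

if __name__ == "__main__":
    print(f"mutual cooperation rate: {run():.3f}")
```

Varying the payoff values, the exploration rate or the length of the state memory in such a sketch is one crude way of probing the conditions under which mutual cooperation becomes more or less frequent.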
Computational Neuroscience: Studying the Brain

We have been working on several topics in Computational Neuroscience, which aims to elucidate the principles of information processing by the nervous system. Our work following a bottom-up approach, i.e., starting from the computational modelling of neurons, has made a number of contributions, which can be identified as follows: (a) on the problem of understanding neuronal coding (i.e., how the brain encodes/decodes and transmits information), we contributed by (α) identifying partial somatic reset as the most likely mechanism for producing highly irregular firing at high rates in cortical cells, while showing that with concurrent excitation and inhibition firing only becomes nearly irregular (a simulation sketch of the partial reset mechanism is given after this section), (β) showing that high firing irregularity enhances learning, (γ) developing a measure, based on the normalised slope of the membrane potential depolarisation shortly before the firing threshold is crossed, for estimating the degree of input synchrony relevant to the firing of response spikes, inferring the neuron’s operational mode, i.e., whether it behaves as a temporal integrator or a coincidence detector of random input spikes, and quantifying the relative contribution of each mode to firing, and (δ) developing a spike-based measure, based on a decomposed form of the response–stimulus correlation, for inferring the neuron’s operational mode; (b) showing that partial somatic reset and inhibitory inputs are two mechanisms that can control the neuron’s gain; (c) showing that the precision of pyramidal neurons is not high enough to learn temporally precise inter-spike delays through coincidence detection combined with input correlations in synaptic plasticity and input selectivity; (d) developing a leaky-integrator-type neuron model that separates dendritic and somatic integration and uses stochastic synapses, which (α) allows different levels of firing variability to be produced and (β) can be used as a motion and velocity detection system; and (e) developing a biophysical model of endocannabinoid-mediated short-term depression of inhibition/excitation in the hippocampus, enabling an understanding of the role of cannabinoids in learning and memory.
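To make the partial somatic reset mechanism of contribution (a)(α) concrete, the sketch below simulates a leaky integrate-and-fire neuron driven by random excitatory input and compares the coefficient of variation (CV) of its interspike intervals under a total reset and under a 91% partial reset. All parameter values, the Bernoulli approximation of Poisson input and the current-based synapses are illustrative assumptions rather than the published model.

```python
# Minimal sketch: leaky integrate-and-fire neuron with total vs partial somatic
# reset, comparing interspike-interval (ISI) irregularity via the CV. All
# parameters and the simplified input model are illustrative assumptions only.
import random
import statistics

def simulate(reset_fraction, t_max=20.0, dt=0.0001):
    """reset_fraction is the post-spike reset level as a fraction of threshold:
    0.0 reproduces a total reset to rest, 0.91 a 91% partial somatic reset."""
    tau_m, v_rest, v_thresh = 0.010, 0.0, 1.0      # membrane time constant (s), voltages (a.u.)
    rate_in, w_in = 3000.0, 0.05                   # input rate (spikes/s), EPSP size
    v, t, spike_times = v_rest, 0.0, []
    while t < t_max:
        spike_in = random.random() < rate_in * dt  # Bernoulli approximation of Poisson input
        v += dt * (-(v - v_rest) / tau_m) + (w_in if spike_in else 0.0)
        if v >= v_thresh:                          # fire and apply the (partial) somatic reset
            spike_times.append(t)
            v = v_rest + reset_fraction * (v_thresh - v_rest)
        t += dt
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    cv = statistics.stdev(isis) / statistics.mean(isis) if len(isis) > 2 else float("nan")
    return len(spike_times) / t_max, cv

for frac in (0.0, 0.91):
    rate, cv = simulate(frac)
    print(f"reset to {frac:.2f} x threshold: {rate:6.1f} spikes/s, ISI CV = {cv:.2f}")
```

With the suprathreshold drive assumed here, a total reset tends to produce fairly regular firing, whereas the partial reset leaves the membrane close to threshold after each spike and tends to yield a noticeably higher CV at a higher firing rate.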
Computational Neuroscience: Studying Behaviour

Our Computational Neuroscience work from a top-down approach involves the study of self-control and commitment behaviours through computational modelling. Self-control can be defined as choosing a large delayed reward, while precommitment is the making of a choice with the specific aim of denying oneself future choices. Self-control problems suggest a conflict between cognition and motivation, which has been linked to competition between the higher and lower brain systems; in particular, the limbic system is activated by decisions involving instant rewards, whereas the prefrontal cortex is activated when anticipating large delayed rewards. Our main contribution was the design and implementation of a computational model of self-control behaviour in which two neural network players (representing the higher and lower brain systems, viewed as cooperating for the benefit of the organism) learn simultaneously but independently while competing in the IPD game. With a technique resembling the precommitment effect, whereby the payoffs for the dilemma cases in the IPD payoff matrix are differentially biased, we showed that increasing the precommitment effect increases the probability of cooperating with oneself in the future (a sketch of such payoff biasing is given after this section). We also added to our self-control model the concept of emotions, defined as states elicited by positive and negative reinforcers. By increasing and decreasing the reinforcement signal values in the IPD payoff matrix between rounds, we simulated increases and decreases in the intensity of positive or negative emotions, and thus the effects of the presence of emotions rather than the emotions per se. Our results reflect the restorative role of positive emotions on self-control, the necessity of negative emotions for successful self-control, and self-control impairment due to intense negative emotions. Furthermore, our results reveal the importance of parameters in self-regulation, such as the intensity of emotions and the frequency with which it changes. We are currently investigating with our computational model the relationship between self-control and consciousness.
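As a rough illustration of the precommitment-like payoff biasing described above, the sketch below differentially adjusts the two dilemma outcomes (one player cooperates, the other defects) of a standard IPD matrix by a bias parameter. The base payoffs and the exact form and strength of the bias are illustrative assumptions, not the scheme of the published model.

```python
# Minimal sketch: differentially biasing the dilemma outcomes of an IPD payoff
# matrix to mimic a precommitment effect. Base payoffs and the form/strength
# of the bias are illustrative assumptions only.
C, D = 0, 1
BASE = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def precommitment_bias(payoffs, bias):
    """Shift reward from the defector to the cooperator in the two dilemma
    cases, making unilateral defection less attractive as `bias` grows."""
    biased = dict(payoffs)
    biased[(C, D)] = (payoffs[(C, D)][0] + bias, payoffs[(C, D)][1] - bias)
    biased[(D, C)] = (payoffs[(D, C)][0] - bias, payoffs[(D, C)][1] + bias)
    return biased

for bias in (0.0, 1.0, 2.0):
    m = precommitment_bias(BASE, bias)
    print(f"bias={bias:.1f}: (C,D) -> {m[(C, D)]}, (D,C) -> {m[(D, C)]}")
```

Feeding such a biased matrix to two independently learning players, for instance the Q-learning agents sketched in the Reinforcement Learning section above, is one crude way of checking whether mutual cooperation becomes more frequent as the bias grows.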