In my mail today: “[…] Thank you for your application to join the Computing and Digital Economy Research Institute. I am pleased to inform you that your application was successful and you are now a member of the Computing and Digital Economy Research Institute. […]”
PhD Scholarship
I have just been successful in securing funding for a PhD studentship. The project is going to be in the area of ‘Shaping the Future of the Intelligent Home’ and is for three years. The scholarship will be advertised this autumn, in September, for a January start.
Some notes on the project:
Computing devices are becoming ubiquitous. Even though chip implants are not imminent, a multitude of computing devices are being integrated into many consumer electronic products and home appliances, thus becoming part of everyone’s home. Many of these devices will attempt to assist human beings in various aspects of everyday life. Examples of this kind of functionality include the ‘suggestions’ given by modern digital video recorders, also called personal video recorders (PVRs), such as the Virgin TiVo; location-based alerts given by mobile phones; and advice based on the data provided by Nike trainers and communicated to MP3 players and watches. These developments are rapidly affecting our lives and the places we visit regularly. The home is one of the first areas targeted by numerous companies in the home entertainment and communication sectors. Initial developments are in the areas of entertainment (recommendations and defaults in TV viewing based on observing and learning habits, semantic search for similarities, lighting preferences), e-commerce (assisted home shopping and automated supply control), and energy saving (automated dimming of lights and setting of the heating according to habits, combined with energy-saving patterns).
The home of the future will use a variety of sensors to perceive and learn about the habits of the person(s) using any of the room(s). Partial information will be gathered and put into a larger context by communicating agents. These agents form a multi-agent system in which agents communicate to exchange information to achieve the desired goal by picking suitable plans (rules) and making them the current intention. Several intentions can be active concurrently in the pursuit of a number of (sub)goals. Agents also might have to work together (form a coalition) to achieve the goal(s). Changes in the environment, such as the availability of sufficient resources, can lead to the necessity of dropping an intention and picking an alternative plan.
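As a purely illustrative aside, the sketch below shows one way the plan/intention cycle described above could look in code: an agent holds beliefs gathered from sensors or other agents, adopts a plan as an intention when its resource requirements are met, and drops that intention again if the environment changes. All class, plan, and resource names are hypothetical and do not come from the project.

```python
# Minimal, hypothetical sketch of a plan/intention cycle for a home-assistance agent.

class Plan:
    def __init__(self, name, required_resources, action):
        self.name = name
        self.required_resources = required_resources  # set of resource names the plan needs
        self.action = action                          # callable run when the plan executes

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}     # partial information gathered from sensors and other agents
        self.intentions = []  # currently adopted (goal, plan) pairs; several can be active

    def tell(self, key, value):
        """Receive a piece of information from another agent and update beliefs."""
        self.beliefs[key] = value

    def adopt(self, goal, candidate_plans, available_resources):
        """Pick a suitable plan for the goal and make it a current intention."""
        for plan in candidate_plans:
            if plan.required_resources <= available_resources:
                self.intentions.append((goal, plan))
                return plan
        return None  # no applicable plan right now; the goal remains open

    def step(self, available_resources):
        """Execute intentions; drop any whose resources are no longer available."""
        still_active = []
        for goal, plan in self.intentions:
            if plan.required_resources <= available_resources:
                plan.action(self.beliefs)
                still_active.append((goal, plan))
            # otherwise the environment has changed, so the intention is dropped
            # and an alternative plan can be adopted later via adopt()
        self.intentions = still_active

# Hypothetical usage: a lighting agent adopts an energy-saving plan and drops it
# once the required resource disappears.
lighting = Agent("lighting")
dim = Plan("dim_lights", {"dimmer"}, lambda beliefs: print("dimming lights"))
lighting.adopt("save_energy", [dim], {"dimmer", "heating"})
lighting.step({"dimmer"})  # plan runs
lighting.step(set())       # resource gone: intention is dropped
```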
The user profile will be updated continuously in order to give informed advice and suggestions. The user(s) will not need to interact with the learning system, but – especially in the initial learning phase – it will be beneficial if the user can rate the suggestions made by the intelligent home to accelerate the learning process. Modern, flexible work patterns impose the necessity to move house more often than previous generations. To accommodate this development, it is essential that the information gathered is separated into location-specific information and more general information that would be portable and might even be carried on mobile devices (or chip implants) to be available to the user at any time and at any location. For the home of the future, this could accelerate the learning process in a new environment, since some location-independent personalised patterns would already be available. Potentially, this information can also be used for personal assistance in mobile devices.
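To make the split concrete, the following toy sketch (field and function names invented for illustration only) separates a learned profile into a portable part that travels with the user and a location-specific part that is re-learnt after a move, with the portable part used to seed the new home:

```python
# Hypothetical split of a user profile into portable and location-specific parts.
from dataclasses import dataclass, field

@dataclass
class PortableProfile:
    """Habits that travel with the user, e.g. on a mobile device."""
    preferred_room_temperature: float = 20.0
    typical_wake_time: str = "07:00"
    favourite_genres: list = field(default_factory=list)

@dataclass
class LocationProfile:
    """Habits tied to one particular home; re-learnt after moving house."""
    home_id: str = ""
    lighting_levels_per_room: dict = field(default_factory=dict)
    heating_schedule: dict = field(default_factory=dict)

def bootstrap_new_home(portable: PortableProfile, home_id: str) -> LocationProfile:
    """Seed a new home's profile from portable preferences to speed up learning."""
    schedule = {"weekday_morning": portable.typical_wake_time}
    return LocationProfile(home_id=home_id, heating_schedule=schedule)
```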
This scenario poses some fundamental research questions:
• How can artificial intelligence in general and agent technology in particular be facilitated successfully for assistive devices in a typical home setting?
• How can habits be perceived using affordable technology?
• How can life-assistance agents in general, and home-assistance agents in particular, be implemented according to ethical and legal requirements?
• How can data protection be guaranteed?
• How can unwanted interference with the various sensors, computing devices, etc. be avoided or be kept to a minimum?
• How can the system react optimally and reliably in the case of insufficient resources or other changes in the environment?
• How can conflicting interests of family members and intentions competing for resources be dealt with?
Drawing on techniques known from artificial intelligence (including games AI), learning theory, and agent programming, the aim of the project is to develop algorithm prototypes for an assistive home of the future. Applicable techniques include resource-based and location-based reasoning, rule-based systems, neural networks, search strategies, and a variety of logic and agent-oriented programming methodologies. These well-explored techniques can be combined to form a framework for ‘personal assistants’ in the home of the future.
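As a rough, hypothetical illustration of how such a combination might look, the snippet below pairs a tiny rule-based layer (suggesting actions from the current context) with user ratings that re-rank the suggestions, along the lines of the feedback-accelerated learning mentioned earlier; the rules and weights are invented for this example:

```python
# Hypothetical combination of a rule-based suggestion layer with user feedback.

def rule_based_suggestions(beliefs):
    """Tiny rule base mapping observed context to candidate suggestions."""
    suggestions = []
    if beliefs.get("time") == "evening" and beliefs.get("room") == "living_room":
        suggestions += ["dim_lights", "recommend_recorded_programme"]
    if beliefs.get("nobody_home"):
        suggestions.append("lower_heating")
    return suggestions

def rank_by_feedback(suggestions, ratings):
    """Order suggestions by past user ratings (unrated suggestions score 0)."""
    return sorted(suggestions, key=lambda s: ratings.get(s, 0), reverse=True)

ratings = {"dim_lights": 3, "lower_heating": -1}
context = {"time": "evening", "room": "living_room", "nobody_home": False}
print(rank_by_feedback(rule_based_suggestions(context), ratings))
# -> ['dim_lights', 'recommend_recorded_programme']
```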
LAM’12 deadline approaching
The 5th International Workshop on Logics, Agents, and Mobility (LAM’12) will be held at the University of Hamburg in conjunction with the 33rd International Conference on Application and Theory of Petri Nets and Concurrency.
The deadline for submission of papers (including work-in-progress reports and surveys) is 15 April 2012.
Submissions should not exceed 15 pages, preferably using the LaTeX article or LNCS class. The following formats are accepted: PDF, PS.
Please send your submission electronically via the EasyChair-LAM’12 site.
Find out more about LAM’12 and the LAM workshop series at http://lam12.wordpress.com
Guest Editor for Fundamenta Informaticae
Fundamenta Informaticae will publish a special issue dedicated to the best papers of LAM’10 and LAM’11. The issue is scheduled for Autumn 2012 and will contain extended and revised versions of papers presented at the workshops as well as other original work related to Agents, Logics, and Mobility.
I will act as guest editor for this issue with co-editor Melvin Fitting. The official call for papers will be issued later this month.
SE2S04: Marking CW1
Happy New Year to everybody … I have retrieved all submissions from Blackboard and have started marking. All going well, you will receive your marks by the end of January, just as planned.
Roger Needham Lecture 2011
The BCS staged the 2011 Roger Needham Lecture at the Royal Society in London on Tuesday the 1st of November. In this annual lecture the winner of the BCS Roger Needham Award is officially awarded the prize and presents their work to the public. This year’s winner is Prof Maja Pantic from Imperial College London. The lecture on ‘Machine Understanding of Human Behaviour’ presented a decade’s research in the area.
Following an introduction by Prof Jim Norton (president of the BCS), the BCS/CPHC Distinguished Dissertations awardees were announced by Prof Ann Blandford (UCL). The prize was awarded to Daniel Greenfield (Cambridge University); the runner-up was Vera Demberg-Winterfall (Edinburgh).
The presentation of the Roger Needham Award 2011 to Prof Maja Pantic was carried out by Dr Andrew Blake (Head of Microsoft Research), who identified her as a driving force in the area of human behaviour recognition. In her excellent lecture on Facial Behaviour Understanding, Maja engrossed the audience with her enthusiasm for the research she has committed herself to. She introduced the audience, consisting of academics and representatives from industry, to the history of her research in the area of facial expression recognition. This started with an MSc project on static analysis of human facial expressions at Delft, in which she explored prototypic facial expressions using rule-based systems and was able to distinguish six basic expressions. A total of 45 facial action units directly linked to the contraction of muscles were subsequently established, but not all of these were recognisable with the methods at hand in 2001; in particular, motion and dynamic expressions could not yet be handled.
This led to the development of a technique called facial point tracking, which has been the focus of Maja’s research over the past 10 years. One of the problems calling for a solution was sudden and drastic head movement; drastic changes in illumination would also previously prevent facial expressions from being recognised. Temporal models have been developed to help identify errors due to artifacts by tracking which movements are actually possible given the constraints of the underlying muscles. The temporal evolution of face videos also makes it possible to distinguish spontaneous from acted laughter (real joy versus acted happiness). Affective dimensions of dynamic continuous behaviour then led to multi-dimensional continuous interpretation-space mappings rather than the discretisation used in previous approaches, and a new regression method has been established to deal with these.
Maja Pantic concluded her lecture with thanks to her group and all of her previous collaborators, without whom the development of many techniques would not have been possible.
In the question and answer session, Maja competently and enthusiastically answered questions on how successfully people can mask their emotions, the impact of her work in other areas, the extent to which expressions are learnt, the processing power needed for the analysis, and possible extensions of existing speech recognition with her expression recognition techniques. She emphasised that she ultimately wants to help people understand themselves better, in particular people who struggle in social interactions.
The event closed with a vote of thanks by Tom McEwan (Napier University) and a buffet.
The lecture was filmed and is going to be made available on the BCS web site.
(this article will be published in the AISB Quarterly)