Monotonicity in fuzzy modelling and data mining
Professor Amir Hussain


Abstract: Multi-modal cognitive informatics is a rapidly developing discipline, bringing together neurobiology, cognitive psychology and artificial intelligence. Springer Neuroscience has launched a journal in this exciting multidisciplinary field, which seeks to publish biologically inspired theoretical, computational, experimental and integrative accounts of all aspects of natural and artificial cognitive systems. In this talk, we outline a proposal, inspired by the seminal work of the late Professor John Taylor, to create a future cognitive machine equipped with multi-modal cognitive capabilities. Recent work at Stirling University has explored the application of multi-modal Big Data cognitive computing to challenging real-world problems. Three case studies are introduced.

First, ongoing research into cognitively inspired multi-modal speech perception has led to the development of a novel fuzzy-logic-based audio-visual speech processing system. The proposed framework makes cognitively inspired use of both audio and visual (lip-tracking) information, with potential applications in next-generation multimodal hearing aids and listening-device technology.

Second, other work has focused on open-domain sentiment analysis of natural-language text using sentic computing: a novel multidisciplinary paradigm based on the semantic, latent and implicit meaning of natural-language concepts, which implicitly exploits the psychologically inspired notion of dual (unconscious and conscious) processing. Ongoing extensions of this work include a cognitively inspired emotion recognition system based on multimodal input, including text, audio and facial information, which is shown to significantly outperform state-of-the-art uni-modal and bi-modal systems.

Third, a strand of ongoing interdisciplinary research investigates autonomous vehicle control in two challenging problem domains: planetary rovers and smart cars.
The proposed framework aims to develop a multi-modal, vision-based cognitive control system exploiting a psychologically motivated dual-process switching model, together with basal-ganglia-inspired ‘soft’ selection and online learning of multiple controllers. We present a brief summary of these interdisciplinary research areas, and outline possible parallels and links, as well as some future research directions and challenges.
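To make the fuzzy-logic audio-visual idea from the first case study concrete, the sketch below shows one plausible (entirely hypothetical, not the Stirling system's actual rule base) way fuzzy rules could weight the audio stream against lip-tracking cues: an estimated audio SNR and a lip-tracking confidence are fuzzified with triangular membership functions, a small rule base is evaluated, and a weighted-average defuzzification yields the trust placed in audio.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b
    (a == b or b == c gives a shoulder function)."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if a == b else (x - a) / (b - a)
    return 1.0 if b == c else (c - x) / (c - b)

def audio_weight(snr_db, lip_conf):
    """Hypothetical fuzzy weighting of the audio stream (0..1)."""
    # Fuzzify the two inputs (breakpoints are illustrative assumptions).
    snr_low  = tri(snr_db, -10, -10, 15)
    snr_high = tri(snr_db, 0, 25, 25)
    lip_poor = tri(lip_conf, 0.0, 0.0, 0.7)
    lip_good = tri(lip_conf, 0.3, 1.0, 1.0)

    # Rule base: min for AND; each rule proposes an audio weight.
    rules = [
        (min(snr_high, lip_poor), 0.9),  # clean audio, poor lips -> trust audio
        (min(snr_high, lip_good), 0.7),  # both reliable -> mostly audio
        (min(snr_low,  lip_good), 0.2),  # noisy audio, good lips -> trust lips
        (min(snr_low,  lip_poor), 0.5),  # both unreliable -> hedge
    ]
    num = sum(strength * w for strength, w in rules)
    den = sum(strength for strength, w in rules)
    return num / den if den else 0.5  # weighted-average defuzzification

print(round(audio_weight(snr_db=20.0, lip_conf=0.9), 3))  # prints 0.7
```

In clean audio the system leans on the acoustic stream; as noise rises, the same rules shift weight smoothly toward the visual (lip-tracking) cues rather than switching abruptly.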
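The ‘soft’ selection idea in the vehicle-control framework can be illustrated with a minimal sketch (an assumption for illustration, not the actual controller): each candidate controller outputs an action and a salience score, and a temperature-scaled softmax over the saliences blends the actions, so control shifts gradually between controllers instead of hard winner-take-all switching.

```python
import math

def soft_select(actions, saliences, temperature=1.0):
    """Blend scalar controller actions by softmax-weighted salience.

    Lower temperature -> closer to hard (winner-take-all) selection;
    higher temperature -> more even blending of all controllers.
    """
    m = max(saliences)  # subtract max for numerical stability
    exps = [math.exp((s - m) / temperature) for s in saliences]
    z = sum(exps)
    weights = [e / z for e in exps]
    blended = sum(w * a for w, a in zip(weights, actions))
    return blended, weights

# Two hypothetical controllers, e.g. lane-keeping (steer 0.0) vs.
# obstacle avoidance (steer 0.4), with avoidance currently more salient:
action, weights = soft_select(actions=[0.0, 0.4], saliences=[1.0, 3.0])
print(round(action, 3), [round(w, 3) for w in weights])  # prints 0.352 [0.119, 0.881]
```

As the temperature approaches zero the blend converges to the single most salient controller, which is one simple way a dual-process switching model could fall back to decisive, habit-like control when one behaviour clearly dominates.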