Wednesday, August 10, 2016

Towards new innovations in legal and regulatory systems

On 4th of April 2016, in the Modeling Meaning and Knowledge series of mini-symposia, Arho Toikka gave a talk on Creating scientific knowledge as a social process. It was preceded by a short tutorial entitled Kuhn’s Structure of Scientific Revolutions and Gärdenfors’ Conceptual Spaces, presented by Timo Honkela. Toikka's presentation was related to his collaboration with Nina Janasik-Honkela. Their work has addressed, for instance, socio-cognitive views on risk assessment concerning nanoparticles and endocrine disrupting chemicals. Discussing the social construction of knowledge in detail, Toikka pointed out that

  • knowledge is created by purposeful humans who collaborate with each other,
  • science is a special set of tools that produces ever-improving results, but
  • the human cannot be defined away.
What results is that science and knowledge itself become objects of inquiry.

In the discussion following the presentation, Timo Hämäläinen mentioned uncertainty, complexity science and the role of wicked problems in policy making. Hämäläinen has written an interesting article on a related topic entitled "Governance Solutions for Wicked Problems: Metropolitan Innovation Ecosystems as Frontrunners to Sustainable Well-Being". Hämäläinen also told us about a workshop on Second Order Science that took place in Scotland. Second Order Science takes the complexity of the world as its starting point.



The presentation and the discussions inspired us to consider the relationship between regulatory and legal systems on the one hand and different paradigms of artificial intelligence on the other. Policy making and regulations are based on explicit rules that are not unlike the rules used as representations in traditional artificial intelligence. The limitations of symbolic, rule-based representations have become more and more obvious since the peak of development activity in the 1980s. A number of rule-based expert systems were developed, but they could not reach the ambitious goals set for them. To put it simply, experts cannot explain the principles that they use in problem solving. It also seems that complex expert knowledge cannot be represented as a collection of symbolic/logical rules. This is still disputed, even though I have had the chance to work in this area since the late 1980s and early 1990s. Many neural network researchers have worked in this field, but at the early stages most of them came from signal processing backgrounds; only later have tasks such as natural language processing and knowledge representation using neural networks become commonplace.

Even nowadays, the vast majority of legal knowledge and regulations (if not, in practice, all of it) is represented in the form of written sentences and rules. As already said, representing knowledge in the form of rules has serious limitations. One should not imagine that wicked problems and changing, context-dependent situations could be addressed successfully with a limited number of simple rules. We live in a complex world: what may work in one context can be unsatisfactory in another and catastrophic in yet another. It is well known in design that people formulate solutions taking their own needs, experience and understanding as a starting point. Thus the solutions work best for those who are similar with regard to the essential parameters. The same holds true for regulations. It must be underlined that it is not by definition fair and right to have the same rules for all. However outrageous this may first sound, the conclusion becomes obvious when one considers different people, different contexts, different times, and so on. Such contextual differences are already taken into account in many practical regulatory situations. Simplicity is aimed at because keeping track of the effects of the regulations easily becomes too difficult. Here, distributed neural representations and statistical machine learning methods could come to help. Explaining the details would require another story, but in short, these kinds of methods can be used to formulate high-dimensional regulations in a data-driven manner, potentially supported by crowdsourcing. Principles of fairness can be checked and ensured by considering the shapes of decision making surfaces. Moreover, a limited number of key points of the decision making surface can be shown to make it understandable.
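To make the idea concrete, here is a purely illustrative sketch, not drawn from any actual regulatory system: a "regulation" is fitted as a logistic decision surface over context features, and fairness is audited at a few key points of the surface. All features, data and thresholds below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: contexts are 3-feature vectors (say, need, severity,
# and a feature that should NOT influence the outcome); labels 0/1 stand
# for crowdsourced deny/grant judgments.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # feature 2 is irrelevant

# Fit a logistic decision surface by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# Audit the surface at a limited number of key points: two contexts that
# differ only in the feature that should not matter should get (nearly)
# the same decision.
key_points = np.array([[1.0, 0.0, -1.0], [1.0, 0.0, 1.0]])
decisions = 1.0 / (1.0 + np.exp(-(key_points @ w + b)))
```

Showing such key points, rather than the full high-dimensional surface, is one way to keep a learned regulation understandable.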
In summary, legal and regulatory systems could be renewed in a radical manner in order to reach fairness and functionality over the whole decision making space, not only in a limited subset of it.

In a separate discussion, Timo Hämäläinen brought up the book "Simple Rules: How to Thrive in a Complex World" by Donald Sull and Kathleen Eisenhardt. The authors argue for simple rules that communicate well and give room for adaptation. Their argumentation is convincing but does not necessarily overrule what has been said above. Regulations adapted using relevant data, sophisticated statistical machine learning algorithms, high-dimensional representations, and powerful computational resources may give rise to processes similar to those that Sull and Eisenhardt describe, but in a systematic way. This is, of course, a future vision whose realization may take decades. Experiments in this direction could, however, be conducted quite immediately.

Tuesday, January 26, 2016

Artificial Intelligence and Machine Learning for People symposium

On 7th of October 2015, the Finnish Broadcasting Company (YLE) showed a documentary program, A-studio, in which artificial intelligence research and applications were discussed. In the media it is commonplace to consider the dangers and risks related to intelligent machines. This time, however, the focus was on positive developments, in particular related to language processing and health care. Regarding the former, prof. Timo Honkela was interviewed, and an expert view on health informatics was provided by Dr. Jaakko Hollmén. The TV programme is available in YLE Areena.

As the program gave a chance to discuss these issues only briefly, a symposium called "Artificial Intelligence and Machine Learning for People" was organized at the University of Helsinki on Monday 2nd of November 2015. The symposium was organized in collaboration between experts from the University of Helsinki and Aalto University.

Timo Honkela provided an overview in his talk "Artificial Intelligence and Machine Learning in the Service of the Good". Professors Liisa Tiittula (University of Helsinki) and Mikko Kurimo (Aalto University) provided views on how to help people with disabilities to follow media. Tiittula, an expert in interpretation, and Kurimo, an expert in speech technology, described how modern technology can be used to create speech-to-text services, to facilitate content description and to facilitate communication in general in the case of various disabilities. Professor of language technology Jörg Tiedemann (University of Helsinki) explained how advances in machine translation have made it possible to cross language borders. The coverage of the services has grown significantly thanks to the data-driven approaches used. Dr. Jaakko Hollmén (Aalto University) described methods used in intelligent data analysis based on machine learning, using the health of the environment and of people as case studies. Dr. Krista Lagus (University of Helsinki) provided her insights on how data-driven approaches can be used to promote wellbeing. Dr. Jorma Laaksonen (Aalto University) explained how machines that have vision can be used to help us.


Saturday, October 10, 2015

Klaus Förger: From motion capture to performance synthesis

Klaus Förger defended his dissertation “From motion capture to performance synthesis: A data based approach on full-body animation” at Aalto University, School of Science. As the opponent served Associate Professor Hannes Högni Vilhjálmsson, Reykjavik University, Iceland, and as the custos Professor Tapio Takala, Aalto University. The main topic of the dissertation is how to control the style of moving virtual characters with phrases of natural language. Motion style was a central theme, studied both in the context of a single character and in the interaction between two characters. In the lectio praecursoria, Förger described the background, motivation and main results of the work. The dissertation includes collaboration within the Multimodally Grounded Language Technology project.

The opponent opened his commentary by stating that he was very excited about the work. A number of general themes related to the work, as well as details, were discussed. Prof. Högni Vilhjálmsson mentioned that there are 250 joints in the human body, to characterize the complexity of human movement. He brought up the idea of giving the virtual body some self-awareness. How to detect unnaturalness was also discussed; in general, how does one debug human motion? Förger explained that one option is to measure whether the generated movement deviates too much from the examples. Movement styles were discussed in detail; some styles, such as depressed and weak, are interrelated. It was also discussed what it would take to extend the work to facial expressions. Prof. Högni Vilhjálmsson has been active in applying the Behavior Markup Language (BML). The relationship between representations such as BML and data-driven approaches was discussed. During the defence, Förger visualized the results both by showing examples himself and by demonstrating the systems that he had developed during the work.

Högni Vilhjálmsson mentioned that the work is highly interdisciplinary and presented an intriguing final question: "What have you learned about life?" Förger answered by referring to different points of view: the synthesis of those viewpoints provides a valuable view of the whole.

Friday, April 17, 2015

Mark van Heeswijk: Advances in Extreme Learning Machines

Mark van Heeswijk is defending his PhD thesis "Advances in Extreme Learning Machines" at Aalto University School of Science. As the opponent serves Professor Donald C. Wunsch, Missouri University of Science and Technology, USA, and as the custos Professor Erkki Oja, Aalto University School of Science, Department of Computer Science. A large proportion of the research reported in the thesis was conducted in the research group led by Dr. Amaury Lendasse, who nowadays serves at the University of Iowa.

The extreme learning machine (ELM) is a modification of the single-layer feedforward neural network, based on the idea that the hidden neurons do not need to be trained but can be randomly generated. The name and origin of the method were discussed. The opponent referred to the recent discussions on the fact that there have been related methods before the ELM, such as the RVFL (random vector functional-link network). It was concluded that van Heeswijk has carefully referred both to the older and to the more recent work.
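The core idea can be sketched in a few lines (a toy illustration on synthetic data, not code from the thesis): the hidden layer is random and fixed, and only the output weights are solved, in closed form, by least squares.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression task: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=300)

# 1) The hidden layer is generated randomly and never trained.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # input-to-hidden weights
b = rng.normal(size=n_hidden)        # hidden biases
H = np.tanh(X @ W + b)               # hidden-layer activations

# 2) Only the output weights are learned, in closed form, by least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Predict on a fresh grid of points.
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
y_pred = np.tanh(X_test @ W + b) @ beta
```

Skipping iterative training of the hidden layer is what makes the method fast; the cost is that more hidden units are typically needed than in a fully trained network.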

The opponent quoted a piece of news according to which the amount of data will soon grow beyond a zettabyte due to the developments related to the internet of things. He also reminded the audience that neural networks have been widely deployed in different areas of society; for example, such techniques have been heavily applied in human genome research. In the discussion, a number of topics were covered, including the role of randomness, compressive sensing, sparseness, and the time-accuracy tradeoff.

Friday, March 13, 2015

Asta Raami: Intuition Unleashed

In the 1970s, I remember wondering how poorly the Nobel Prize winners were able to answer a question on intuition. Each time the moderator posed the same question, and the well-established researchers in physics, chemistry, medicine and other areas actually had no clue about what intuition is. Today Asta Raami is defending her thesis, closely related to this question. Suitably for the academic context, the School of Art, Design and Architecture of Aalto University, the focus is on how intuition is used in design tasks and how designers can be helped to further develop and use their intuition.

The opponents of Raami's dissertation "INTUITION UNLEASHED – On the application and development of intuition in the creative process" are Charles Burnette, PhD, FAIA, Former Professor and Director, Graduate Industrial Design, The University of the Arts, Philadelphia and Jorma Enkenberg, Professor Emeritus University of Eastern Finland. As the Custos serves Professor Lily Díaz from Media Lab of Aalto University.

In her lectio praecursoria, Raami emphasized that designers consider intuition to be the most trustworthy tool in their work. In the beginning of her studies, she wondered what the methodological basis of design work is and what the role of intuition is in it. Raami did not take a strong stand on the definition of intuition but rather built on its utility. She noted, though, that reasoning faculties are dependent on intuition. In the thesis work, the central questions were how intuition can be used intentionally and how that skill can be developed intentionally. In complex tasks, such as solving wicked problems, intuition is central. As intuition is an unconscious process, it can be confused with the results of other mental processes such as wishful or fearful thinking. To study intuiting, Raami had collected various kinds of data that had mostly been handled qualitatively. One of the opponents, professor Enkenberg, pointed out in the discussion that a more appropriate characterization of the methodology would be mixed methods research. He also brought up that results in artificial intelligence research could be used to guide studies or to give ideas in this area. Enkenberg reminded the audience that knowledge resides in distributed networks. The central empirical theme in Raami's work has been intuition development: in a process circulating around intention and action, expanding the boundaries of the mind, developing perception skills, and developing discernment skills follow each other.

As an opponent, professor Burnette called for a clarification of the concept. Raami wished to keep the concept quite open and rather build on its utility. Burnette pointed out some possibilities such as (1) collaboration with neuroscientists, (2) working on the theoretical foundation of intentional intuition, and (3) building models that integrate design and intuition. In his comment, he made, to my mind, a contradiction in terms by referring to verbal intuition, as intuitive processes can be said to be by definition non-symbolic. Burnette mentioned that intuition has attracted attention through Kahneman's recent book, but research on dual processes of reasoning and unconscious thinking took place long before Kahneman got interested in it. Moreover, it seems to me that Kahneman has a biased view, with an overly strong emphasis on the merits of explicit reasoning.

Regarding contexts other than design, Raami did not want to take a strong stance, even though she mentioned that the basic model works both in workshops lasting one day and in educational processes of several years. In the Cognitive Systems blog, another thesis related to the use of intuition, regarding strategic decision making in companies, has been covered. In our own work, we have considered intuitive, unconscious processes as implicit reasoning processes, as a part of a model of individual and collective expertise.

Friday, August 29, 2014

Svetlana Vetchinnikova: Second language lexis and the idiom principle

Svetlana Vetchinnikova is defending her doctoral dissertation entitled "Second language lexis and the idiom principle" at the University of Helsinki. Professor Susan Hunston (University of Birmingham) serves as the opponent, and Professor Anna Mauranen as the custos.

In the thesis, Vetchinnikova examines how second language users of English acquire, use and process lexical items. Three types of data were collected from five non-native students: (1) drafts of Master’s thesis chapters ("output"), (2) academic publications a student referred to ("input"), and (3) word association responses to several hundred words that the students had used in their theses, presented to them as stimuli in word association tasks. Lexical usage patterns ("output") were compared to the language exposure ("input") and to the word association responses.

As to the study of lexical meaning and how meanings are learned, Vetchinnikova refers to a shift of focus in research from explicit to implicit lexical knowledge, to multi-word units rather than single words, and to usage-based acquisition rather than explicit instruction. A similar shift has been taking place in the computational modeling of language learning.

In the thesis, Vetchinnikova mentions that "Corpus Linguistics has made possible to observe language in a way that makes visible the patterns which are otherwise not discernible for human analytic abilities". Furthermore, she refers to Michael Stubbs who has stated in his ICAME 32 plenary talk that Corpus Linguistics enables similar kinds of analytical processes that led Darwin to his theory of species.

A central question in the thesis is related to how a string of words starts to mean something different from what the sum of the individual words comprising it would normally mean. Delexicalisation and the idiom principle are central notions here. The idiom principle, formulated by Sinclair, refers to the idea that a language speaker has available a large number of semi-preconstructed phrases that constitute single choices, even though they might appear to be analyzable into segments.

One of the conclusions of the work is that the idiom principle is available to second language learners to a much larger extent than is usually claimed. It would be interesting to study how these results relate to attempts to build machine learning systems that learn or detect multi-word expressions, used for example in keyphrase extraction.
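As a loose illustration of such detection (a simple statistical baseline, not a method from the thesis), adjacent word pairs can be scored with pointwise mutual information; recurring high-PMI bigrams behave like the semi-preconstructed single choices of Sinclair's idiom principle. The example text below is invented.

```python
import math
from collections import Counter

def bigram_pmi(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI);
    recurring high-PMI bigrams are candidate multi-word expressions."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (a, b), c in bigrams.items():
        if c >= min_count:
            # log of observed bigram probability over the probability
            # expected if the two words occurred independently
            scores[(a, b)] = math.log((c / n) / ((unigrams[a] / n) * (unigrams[b] / n)))
    return scores

tokens = "of course it is of course a matter of taste of course".split()
scores = bigram_pmi(tokens)
best = max(scores, key=scores.get)
```

In this tiny example, "of course" is the only pair that recurs, and its PMI is positive, flagging it as a unit rather than a free combination.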

Tuesday, December 17, 2013

Jefrey Lijffijt: Computational methods for comparison and exploration of event sequences

Jefrey Lijffijt successfully defended yesterday, on 16th of December, his thesis "Computational methods for comparison and exploration of event sequences". In his thesis work, Lijffijt has developed computationally efficient methods that can be used to compare and explore event sequences such as natural language texts, DNA sequences or sensor data. Central terms in the thesis are burstiness and dispersion, which are measures of the variability of the frequency of an event. An event that is bursty, or that has low dispersion, tends to be frequent in some parts of an event sequence and infrequent in all other parts.

Lijffijt and his colleagues, both linguists and machine learning specialists, have applied the methods developed in the thesis work to data sets from different domains. The text corpora include the British National Corpus, the Corpus of Early English Correspondence, and the novel "Pride and Prejudice" by Jane Austen. Other kinds of event sequences are the spatial occurrence patterns of nucleotides and dinucleotides in the human reference genome, and strain sensor time series from the Hollandse Brug, a bridge in the Netherlands.

To model the contextual behaviour of words, Lijffijt considers their spatial distribution throughout texts. The primary unit used in modelling is the interval between two occurrences of a word in the texts. Bursty words tend to exhibit long inter-arrival times followed by short inter-arrival times, while the inter-arrival times for non-bursty words have smaller variance. In one of the case studies, the purpose was to test if there are linguistic differences between texts of fiction prose written by male and female authors. The results indicated, in a consistent manner with earlier research, that male-authored fiction is dominated by frequent use of noun-related forms, while female-authored fiction is more verb-oriented. Moreover, the personal pronouns that are overrepresented in male-authored texts are the first person plural forms "us" and "we" and the third-person pronouns "its", "their", and "they", while women overuse the second-person forms "you" and "your" which can have singular and plural referents. One important methodological conclusion was that the choice of the statistical test matters both in theory and in practice.
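The inter-arrival idea can be sketched as follows (a toy illustration with an invented variability measure, not Lijffijt's actual statistics): a bursty word mixes short within-cluster gaps with long between-cluster gaps, which shows up as high variance of its inter-arrival times.

```python
def inter_arrival_times(tokens, word):
    """Gaps between successive occurrences of `word` in a token list."""
    positions = [i for i, t in enumerate(tokens) if t == word]
    return [b - a for a, b in zip(positions, positions[1:])]

def gap_variability(gaps):
    """Coefficient of variation of the gaps: roughly, high for bursty words."""
    if len(gaps) < 2:
        return 0.0
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return (var ** 0.5) / mean

# "dog" occurs in one burst plus a stray occurrence; "a" is spread evenly.
tokens = ("a b c " * 10 + "dog dog dog " + "a b c " * 10 + "dog").split()
bursty = gap_variability(inter_arrival_times(tokens, "dog"))
even = gap_variability(inter_arrival_times(tokens, "a"))
```

Here the bursty word's gaps vary far more than those of the evenly spread word, in line with the characterization above.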

As the opponent served Professor Bart Goethals, University of Antwerp, Belgium, and as the custos Professor Juho Rousu.

Wednesday, December 04, 2013

Ricardo Vigário: Steps towards understanding the brain using independent component analysis

Aalto University recognizes the excellence of an individual by promotion to a distinguished professor category known as Aalto Professor. Professor Erkki Oja is the third Aalto Distinguished Professor in the history of the university.

A symposium in honour of Aalto Distinguished Professor Erkki Oja was organized on 3rd December 2013 in Dipoli Conference Center, Espoo. Dipoli, known for its special architecture, is often used for local and international events such as ICANN 2011 conference. The symposium was opened by the President of Aalto University Tuula Teeri and followed by Professor Oja’s lecture "40 years of machine learning and pattern recognition". In addition, four scientific presentations were given by colleagues and his former doctoral students, Professors Samuel Kaski and Jouko Lampinen, Docent Ricardo Vigário, and Doctor Matti Aksela.

In his talk "Component analysis - a machine learning and neuroinformatics view", Ricardo Vigário described Erkki Oja's influential work in developing machine learning and pattern recognition methods. Oja's early work includes important contributions related to artificial neural networks, with publications like "Simplified neuron model as a principal component analyzer" (Oja 1982) and "Principal components, minor components, and linear neural networks" (Oja 1992). Vigário told his own story as a young researcher who was considering opportunities all around the world. He had planned to first move to Finland to take his first steps there and then move to the United States to continue with PhD studies. After some time in Finland, he realized that there was no need to go elsewhere to learn from the best experts - who happened to be in Finland. An interesting coincidence was that in the 1990s the artificial and biological neural network researchers were in the very same building on the Otaniemi campus. This led to collaboration in which the analysis of brain research data was conducted with neurally inspired computational methods. A good example of the successful results is the paper "Independent Component Approach to the Analysis of EEG and MEG Recordings" (Vigário et al. 2000). As the latest developments, Vigário discussed research on phase synchrony, e.g., "A comparison of algorithms for separation of synchronous subspaces" (Almeida et al. 2012).
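The 1982 result mentioned above can be illustrated in a few lines: Oja's learning rule, w ← w + η·y·(x − y·w) with y = wᵀx, drives a single linear neuron's weight vector towards the first principal component of its input. This is a minimal sketch on synthetic data, not code from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D data with one dominant direction of variance.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule: the -y*w term keeps the weight vector from growing
# without bound, unlike plain Hebbian learning.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.01
for _ in range(3):            # a few passes over the data
    for x in X:
        y_out = w @ x
        w += lr * y_out * (x - y_out * w)

# Compare the learned direction with the leading eigenvector (PC1).
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, -1]
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

After a few passes, the weight vector has unit length and is closely aligned with the principal component, which is exactly the "simplified neuron as a principal component analyzer" result.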

Friday, November 29, 2013

Nick Enfield: Human Sociality and Systems of Language

Langnet is a Finnish doctoral programme in language studies. Langnet Conference takes place in Jyväskylä from 28th to 30th of November. The program consists of presentations by graduate students and plenary talks by invited scholars. The plenaries are given by Wolfram Bublitz (Augsburg University), Christina Higgins (University of Hawai'i at Manoa), Nick Enfield (Max Planck Institute for Psycholinguistics, Nijmegen), and Scott Jarvis (Ohio University).

In 2009, Nick Enfield was awarded a European Research Council (ERC) grant to set up a 5-year project under the ERC's Starting Independent Researcher Grant programme. The project 'Human Sociality and Systems of Language Use' has involved extensive fieldwork on several non-European languages. The basic idea has been to test the hypothesis that patterns of language use are universally grounded in social-cognitive interactional propensities. In his talk, entitled 'Human Sociality and Systems of Language Use: the case of "other-initiated repair"', Enfield discussed the background and motivation for the research and presented a number of specific results of his and his colleagues' work.

Enfield started by presenting a transformation of interests in the study of language and cognition. The earlier focus was on reference and representation; more and more research has since considered language as social action and studied the sociality of cognition. Enfield's research interests include causal dependencies in semiotic systems, for instance, the interplay between individual cognitive representations and processes, actual communicative interactions, and higher-level systems such as languages. In his talk, Enfield discussed in some detail the relationship between different causal frames or timescales, such as the diachronic, synchronic, ontogenetic, phylogenetic, microgenetic, and enchronic.

A group of researchers at the Max Planck Institute for Psycholinguistics in the Netherlands, Mark Dingemanse, Francisco Torreira and Nick Enfield, recently published a result that attracted considerable international attention. They suggested that “huh?” is a universal expression, based on evidence from languages scattered across five continents. Even though Finnish was not among the 31 languages studied, the Finnish media suggested that the Finnish expression "häh" is used all around the world.

Enfield discussed in detail the basis for the universality behind the "huh" finding. He presented a pragmatic universals hypothesis: while languages are highly diverse, systems of language use are (1) largely common across languages, and (2) may be locally inflected. Enfield referred to the book "Roots of human sociality: Culture, cognition and interaction", published in 2006, which he and Stephen C. Levinson have edited. Potential reasons for universality include (1) natural meaning, (2) processing, and (3) sociality. Reasons for local inflection include differences between languages as well as between societies. Enfield's upcoming book is "Relationship Thinking: Agency, Enchrony, and Human Sociality". The book outlines a framework for analyzing social interaction and its linguistic, cultural, and cognitive underpinnings.

Enfield's own empirical specialization is in the languages of mainland Southeast Asia, especially Lao and Kri. His colleagues and collaborators have collected evidence all over the world. The research method includes six main steps: (1) corpus collection, (2) data workshops, (3) coding design, (4) coding work, (5) ensuring reliability, (6) preparation of results.

Monday, November 04, 2013

Sunandan Chakraborty: Big Data Analytics for Development

Sunandan Chakraborty gave a talk entitled "Big Data Analytics for Development" in the HIIT Otaniemi seminar series. Chakraborty is presently doing an internship at Microsoft Research Cambridge and is a PhD candidate at the Computer Science department of New York University.

Chakraborty's research includes the analysis of parallel streams of datasets from the web to infer and forecast macroeconomic and societal indicators. The present main focus is to extract events and infer spatio-temporal relationships between events using online news articles and other web-based sources. As related work, Chakraborty discussed, for instance, United Nations Global Pulse, which gathers timely information to track and monitor the impacts of global and local socio-economic crises, and the use of Google Earth to study flood hazards.

Potential sources of useful data include news articles, blogs, social media, online image data and mobile call records. These can be used for the inference of socio-economic indices. Challenges with such data sources include biases, incomplete coverage, the influence of personal opinions, noise, lack of uniformity in quality, cost, privacy issues, and limited availability. Chakraborty described five projects and their results:

  • development of a location specific summarization tool
  • development of diagnostic tools for online textbooks
  • computing cropland disappearing rate using Google Earth Satellite images
  • extracting structure from unstructured text
  • using mobile apps to collect data

Chakraborty reported results related to the design and use of a system that mines news articles, blogs and other information sources on the web to automatically summarize important climatic and agricultural trends, as well as to construct a location-specific climatic and agricultural information portal. This system is described in the article "Location specific summarization of climatic and agricultural trends" by Chakraborty and Subramanian. The idea has been to collect topic-specific information on, for example, erosion, infertility, scarcity, drought and floods in different locations. The authors have evaluated the system across 605 different districts in India.

An interesting example of a diagnostic tool for online textbooks is described in the article "Empowering authors to diagnose comprehension burden in textbooks". The authors (Agrawal, Chakraborty et al.) mine textbooks to identify sections and concepts that can benefit from reorganization. With their method, authors can quantitatively assess the burden that a textbook imposes on the reader due to the non-sequential presentation of concepts. They have applied the tool to a corpus of high school textbooks that are in active use in India. This method could potentially be used in a complementary fashion with a related method described in our article "Assessing user-specific difficulty of documents".

Chakraborty discussed the fact that less and less land is used for agriculture. In India, farmers protest land acquisitions; climate change also impacts the global agricultural situation. In their paper "Computing the rate of disappearance of cropland using satellite images", Chakraborty and his colleagues present a tool that can monitor this change through satellite images. Google Earth offers a huge corpus of satellite images across the globe, which they have used in the analysis to distinguish between arable, barren, tree-covered and developed land areas. This application area has, of course, a long history in relation to pattern recognition and image analysis research (consider, e.g., the paper "A Comparative Study of Texture Measures for Terrain Classification" from 1976). Computational sustainability is naturally becoming more and more relevant.

A current project was described in which archived news data is analyzed for event detection and for the analysis of spatio-temporal relationships between events. The work currently focuses on India and Indian news articles.

Monday, October 28, 2013

Melanie Swan: Big Data and the Quantified Self

Melanie Swan is visiting the National Consumer Research Center in Helsinki. Swan is a Quantified Self and Big Data Research Principal at the MS Futures Group, Palo Alto, California. Minna Ruckenstein is hosting the visit and served as the chair of the invited talk by Swan. The title of Melanie Swan's talk was "Big Data and the Quantified Self".

According to Swan, a key contemporary trend emerging in big data science is the quantified self (QS). The quantified self refers to the activity in which individuals engage in the self-tracking of any kind of biological, physical, behavioral, or environmental information, as individuals or in groups. She covered a number of QS projects and tools, including the Personal Analytics Companion (PACO) and MIT Body Track, as well as closely related developments such as the internet of things, which contributes to the explosion of big data.

Swan discussed various topics in personal health 'omics' and reminded the audience that the fastest growing area of big data is human biology-related data. One possibility emerging from these developments is a shift from reactive to proactive health care. Another opportunity is a QS Data Commons, for which GitHub has emerged as the de facto platform. Mental performance optimization (mood management apps, etc.) and quality of life development belong to the current QS frontier. As a means for behavior change, Swan discussed Shikake: sensors and actuators embedded in physical objects to trigger a physical or psychological behavior change.

Big data opens up new methodological opportunities. A well-known example is Google, which builds services on unsupervised machine learning over vast text collections rather than on traditional artificial intelligence approaches. Swan mentioned a number of contemporary topics including:

  • foundational characterization (longitudinal baseline measures of internal and external daily rhythms, normal deviation patterns, contingency adjustments, anomaly and emergent phenomena)
  • new kinds of pattern recognition
  • multidisciplinary models (turbulence, topology, chaos, complexity, etc.)

An interesting theme was building exosenses for the quantified self. This means extending our senses in new ways to perceive data as sensation. Exosenses serve as quantified intermediates.

In the second part of her talk, Swan concentrated on the social aspects of QS, i.e., collecting and analyzing group data. Underlying trends include a growing and aging world population and urbanization.

Towards the end of the presentation, Swan discussed limitations and risks related to big data. An interesting concept that came up is sousveillance, the opposite of surveillance. In French, surveillance means "watching from above" whereas sousveillance means "watching from below." It seems that sousveillance and general transparency could be a useful counterforce or antidote against totalitarianism and big brother activities. Some other means are needed, though, to diminish widespread categorical thinking, which can be a source of many kinds of societal problems, whether top-down or bottom-up. Far too often, people base their decisions and actions on overly clear-cut interpretations. This prevents reaching good solutions through evolutionary processes. Therefore, analysis of big data needs to deal with the level of interpretation and its complexities, including contextuality and subjectivity.

Friday, October 25, 2013

Elizabeth Bradley: Chaos and Control with Applications in Science, Technology and Art

Prof. Elizabeth Bradley, University of Colorado is visiting the Department of Information and Computer Science, hosted by Jaakko Hollmén. Bradley is an editor of the journal Chaos: An Interdisciplinary Journal of Nonlinear Science and has been program chair of Dynamics Days in 2006, International Workshop on Qualitative Reasoning in 2008, and the International Symposium on Intelligent Data Analysis in 2003 and in 2011.

On 24th of October, Bradley gave an invited talk entitled "Chaos and Control". She provided a review of the mathematical theory and computational techniques that are used in the control of chaos, and covered a variety of examples ranging from science and engineering to music and dance.

Bradley started by introducing basic concepts related to dissipative dynamical systems. Chaos can be defined as complex behavior, arising in a deterministic nonlinear dynamic system, which exhibits two special properties: (1) sensitive dependence on initial conditions, and (2) characteristic structure. Properties of chaotic or "strange" attractors include exponential divergence of neighboring trajectories and dense coverage of the attractor by trajectories.
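Sensitive dependence is easy to demonstrate numerically. The following sketch (my own illustration, not an example from Bradley's talk; it uses the standard logistic map as a stand-in for a chaotic system) iterates two orbits that start a tiny distance apart and records how quickly they separate:

```python
def logistic_orbit(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits starting 1e-10 apart: the separation grows roughly
# exponentially until it saturates at the size of the attractor.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
separations = [abs(x - y) for x, y in zip(a, b)]
```

After a few dozen iterations the initial 1e-10 difference has been amplified to order one, which is exactly the sensitive dependence described above.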

Bradley emphasized that chaos is not an academic oddity. Actually, nonlinearity and chaos are ubiquitous, e.g., in hearts, brains, populations, planets, black holes, pulsars, flows of heat and fluids, and many other kinds of systems. Key concepts of chaotic systems from the point of view of control include

  • characteristic attractor geometry,
  • exponential trajectory separation,
  • dense attractor coverage,
  • bifurcations,
  • un/stable manifold structure, and
  • unstable periodic orbits

From the point of view of problem solving, denseness indicates reachability. In other words, trajectories on a chaotic attractor densely cover a set of non-zero measure and thus make all points in that set reachable from any initial condition in its basin of attraction. Reachability is nondeterministic, and therefore using chaos in control is not suited for time-critical applications. Bradley described an early instance of research related to this topic, published as "Using Chaos to Broaden the Capture Range of a Phase-Locked Loop" (1993). She discussed how to target a specific point on the attractor, exploiting sensitive dependence on initial conditions for control leverage and controllability. Here she referred to her MIT PhD thesis "Taming Chaotic Circuits" (1993) and to the work of Troy Shinbrot and his colleagues. Bradley also warned against naive expectations regarding applications of controlling complex chaotic systems. For instance, it is impossible to find out where a butterfly should flap its wings in order to control the future development and route of a hurricane.
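The denseness-implies-reachability argument can also be illustrated with the logistic map (again a hedged stand-in of my own, not Bradley's example): a single chaotic orbit eventually visits every subinterval of the attractor, so any target region is reachable simply by waiting, though the arrival time is unpredictable.

```python
def visited_bins(x0, r=4.0, n=20000, bins=10):
    """Track which of `bins` equal-width subintervals of [0, 1]
    a logistic-map orbit enters."""
    x = x0
    seen = set()
    for _ in range(n):
        x = r * x * (1.0 - x)
        seen.add(min(int(x * bins), bins - 1))
    return seen
```

That the orbit covers all bins, but at unknown times, matches the warning that chaos-based control is not for time-critical applications.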

Towards the end of her presentation, Bradley discussed two applications of chaos theory in art. First, she introduced Diana Dabby's work on generating variations using chaotic mapping. A convincing example was a variation of J.S. Bach's Prelude in C from the Well-tempered Clavier, Book I generated by the chaotic mapping. The details of the mapping technique are described in the article "Musical variations from a chaotic mapping". As a second application in the area of performing arts, Bradley described her and her collaborators' work on dance. She discussed, for example, the paper "Learning the Grammar of Dance" by Joshua Stuart and herself.

Monday, September 23, 2013

Consortium of National Institutes for Health and Wellbeing in Finland

Consortium of National Institutes for Health and Wellbeing (SOTERKO) was formed to improve the quality and efficiency of research and development among three institutes under the Ministry of Social Affairs and Health in Finland. The activities include expert networking and joint research and development programs. The Finnish Institute of Occupational Health, the National Institute for Health and Welfare and the Radiation and Nuclear Safety Authority participate in the consortium. In a recent resolution on the comprehensive reform of research institutes and research funding, deeper, network-based collaboration was called for, crossing the boundaries of government agencies and public bodies. The activities begun under, e.g., SOTERKO will be developed and expanded in order to improve the quality, productivity and impact of research and consultancy.

SOTERKO organized its first research seminar day on 23rd of September, 2013. The event was opened by Pekka Puska (Director General of the National Institute for Health and Welfare) and Tapani Hellstén (Deputy CEO of KEVA). The seminar took place at KEVA, the former Local Government Pensions Institution.

Raine Hermans, Ph.D., is the director of strategic intelligence at Tekes, focused on impact analysis, innovation research funding, and knowledge management. He has co-authored the book "Medical Innovation and Government Intervention". Hermans gave a talk on the Finnish innovation system in general and discussed issues related to health and wellbeing services. He discussed in detail a model of an innovation cluster for health care applications, referring to Hermans, Kulvik and Löffler (2009). The model includes unique factors of production, a learn-and-let-go strategy, supporting industries and a domestic market laboratory.

The results of different SOTERKO consortium projects were described in a number of presentations. Eira Viikari-Juntura (Research Professor, Finnish Institute of Occupational Health) presented a research program on chronic diseases and working life. Marianna Virtanen (Research Professor, Finnish Institute of Occupational Health) provided additional details. Päivi Kurttio (Head of Laboratory, Radiation and Nuclear Safety Authority STUK) presented a program on risk management. Research on risks includes, for instance, research on indoor air, the risks of mining, and risk communication.

Markku Sainio (Adjunct Professor, Finnish Institute of Occupational Health) discussed in detail issues related to idiopathic environmental intolerance. A central problem is that there is a large number of potential risk factors as well as symptoms related to the sensitivity. Sainio's focus was on the adaptive processes that may explain why in some cases the fear of the risk creates an even greater problem than the risk itself. In the emotional response, the amygdalae have a central role. If a perceived risk is automatically associated with an adverse reaction, dealing with the situation becomes complicated. Future research is needed to help create interventions that cover both careful analysis of environmental risk factors and the adaptive emotional processes that may become the primary concern, for example, because of potentially unnecessary avoidance behaviors. In the first task, modern data analysis and mining methods are important, whereas in the second task cognitive modeling techniques can be useful.

Research Professor Jussi Simpura (National Institute for Health and Welfare) presented a report on technological change and the future of wellbeing. The work includes a division into three scenarios ranging from positive through neutral to negative. Adjunct Professor Timo Honkela (Aalto University) presented comments on the report and gave some optimistic insights into the use of modern and emerging computing technologies in the area of health and wellbeing.

The introductions to SOTERKO programs continued with presentations by coordinators Päivi Hämäläinen (National Institute for Health and Welfare) on digital resources, Päivi Husman (Finnish Institute of Occupational Health) on young adults, and Sakari Karvonen (National Institute for Health and Welfare) on inequality. Research Professor Jukka Vuori (Finnish Institute of Occupational Health) also discussed this area in detail, including studies on how to prevent exclusion among young people.

Friday, September 20, 2013

Andrea Botero: Expanding Design Space(s)

At Aalto University School of Arts, Design and Architecture, Andrea Botero defended today her PhD thesis "Expanding Design Space(s) - Design in Communal Endeavours". Dr. Monika Büscher (Lancaster University) acted as the opponent and Prof. Lily Diaz as the Custos. Botero started her lectio praecursoria with a personal anecdote. She described her experiences giving birth in a baby-friendly hospital, in this case the Kätilöopisto Maternity Hospital with its family-friendly special unit called Haikaranpesä (Stork's Nest). She concluded the description by stating that giving and assisting a birth is neither a miracle nor a routine thing to do. Many partnerships of many kinds are needed that extend well before and after the birth. A lot can be achieved through companionship and caring. This issue of caring became one of Botero's main interests in design research.

Botero continued by presenting two case studies of fledgling communities (seniors aging together) and emergent collectives (citizens and city officials linked by locative media). Particular attention was paid to expanding what comprises the design spaces of these communal endeavours to capture a wider interplay of possibilities, including practices, partly assembled technologies, as well as developing competencies and social arrangements. Related to the first case study, Botero discussed experiences in developing and using Miina, a web-based everyday life management system for the Loppukiri residents in Arabianranta.

The opponent raised discussion on a number of topics including unintended consequences and the role of designers in collaborative design. The interesting book "Ignorance and Surprise" by Matthias Gross was touched upon in the discussion. Another theme was how to create trust in collaboration. Openness regarding one's goals and motivations was emphasized as an important factor. Regarding solving problems in the world and a designer's role in it, Botero provided a nice formulation: one does not need to go to the people and tell them that they need to save the world, as they are doing it already; one can simply offer them help. Starting from Bruno Latour's matters of concern and moving on to Pelle Ehn's and his colleagues' work, the discussion closed with matters of care and everything that we do to sustain life.

Wednesday, September 18, 2013

Lance Fortnow: Bounding Rationality by Computational Complexity

Lance Fortnow is professor and chair of the School of Computer Science of the College of Computing at the Georgia Institute of Technology. His research focuses on computational complexity and its applications to economic theory. Fortnow and William Gasarch write in a widely read blog on computational complexity. Fortnow is the author of a recently published book entitled "The Golden Ticket: P, NP, and the Search for the Impossible".

Fortnow is visiting Aalto University and gave an ICS Forum talk at the Department of Information and Computer Science at the Aalto University School of Science. The title of his talk was "Bounding Rationality by Computational Complexity".

As a general topic, Fortnow showed how to incorporate computational complexity into various economic models including game theory, prediction markets, forecast testing, preference revelation and contract theory. He discussed, for instance, the so-called factoring game, introduced in the article "An Approach to Bounded Rationality" by Eli Ben-Sasson, Adam Tauman Kalai and Ehud Kalai. The basic motivation of the paper is how a rational intelligent agent should behave in a complex environment, given that it cannot perform unbounded computations.
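The intuition behind the factoring game rests on the asymmetry between multiplying and factoring. A minimal sketch (my own illustration, not the game itself): checking a proposed factorization is a single multiplication, while recovering the factors by trial division requires work that grows exponentially in the bit length of the number.

```python
def trial_division_factor(n):
    """Factor n into primes by trial division: fine for small n,
    hopeless for cryptographically sized n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

# Multiplying 1009 * 1013 is one machine operation; undoing it here
# takes about a thousand candidate divisors, and the gap between the
# two directions widens exponentially as the numbers grow.
```

A computationally bounded player therefore cannot treat "know the number" and "know its factors" as equivalent, which is the gap the game exploits.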

Another example discussed by Fortnow was weather forecasting. He discussed Sandroni's theorem, published in the article "The reproducible properties of correct forecasts". Fortnow continued by describing the results published in the article "The Complexity of Forecast Testing" by himself and Rakesh V. Vohra.

Another interesting topic was computational awareness. The amount of unawareness of an object is the time needed to enumerate that object in a certain environment and context. A context is a topic like "restaurant". The environment would consist of ways to find a restaurant, including memories, interactions with others, guidebooks, the internet, etc.

Fortnow discussed conditions that characterize wise crowds, referring to the book by James Surowiecki on the wisdom of crowds. Four criteria were mentioned: diversity of opinion, independence, decentralization and aggregation.
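The aggregation criterion has a simple statistical core: averaging many independent, unbiased estimates shrinks the error roughly by the square root of the crowd size. A toy simulation of my own (the parameter values are arbitrary):

```python
import random

def crowd_estimate(truth=100.0, crowd_size=1000, noise=30.0, seed=0):
    """Average independent noisy guesses of `truth`; the mean has
    standard error noise / sqrt(crowd_size)."""
    rng = random.Random(seed)
    guesses = [truth + rng.gauss(0, noise) for _ in range(crowd_size)]
    return sum(guesses) / len(guesses)
```

The independence criterion matters precisely because correlated errors do not average out this way.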

Monday, September 16, 2013

Stephen Grossberg: Cooperation, competition, preference, and rational decision making

Stephen Grossberg is one of the central founders of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. He has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. Already at that time, Grossberg presented the idea of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions. Grossberg founded and was the first President of the International Neural Network Society (INNS). The formation of INNS soon led to the formation of the European Neural Network Society (ENNS) and the Japanese Neural Network Society (JNNS). With Gail Carpenter, Grossberg developed adaptive resonance theory (ART), a theory of how the brain can quickly learn, and stably remember and recognize, objects and events in a changing world.
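A drastically simplified sketch of the ART idea (my own, omitting the choice function, the search/reset cycle and all of the dynamics of the full theory): a binary input joins an existing category only if it matches that category's prototype well enough, as judged by a vigilance parameter `rho`; otherwise a new category is recruited. This is how ART balances stable memories against plasticity for novel inputs.

```python
def art_sketch(patterns, rho=0.5):
    """Assign binary patterns to categories using an ART-style vigilance test."""
    prototypes = []   # one binary prototype per category
    labels = []
    for p in patterns:
        for j, w in enumerate(prototypes):
            overlap = [a & b for a, b in zip(p, w)]
            if sum(overlap) / sum(p) >= rho:   # vigilance test
                prototypes[j] = overlap        # fast learning: w <- p AND w
                labels.append(j)
                break
        else:
            prototypes.append(list(p))         # no match: recruit a new category
            labels.append(len(prototypes) - 1)
    return prototypes, labels
```

With a higher `rho`, fewer inputs pass the vigilance test, so more and finer categories are formed.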

The European Neural Network Society organizes the annual International Conference on Artificial Neural Networks (ICANN). The first ICANN was organized in Finland in 1991. This year the conference was held in Sofia, Bulgaria, at the Technical University of Sofia in collaboration with the Institute of Information and Communication Technologies of the Bulgarian Academy of Sciences and the Union of Automation and Informatics.

With Erkki Oja, Karlheinz Meier, Nikola Kasabov, Alessandro E. P. Villa and Günther Palm, Stephen Grossberg was a plenary speaker of ICANN 2013. Grossberg's topic was "Behavioral economics and neuroeconomics: Cooperation, competition, preference, and decision making". The general themes of Grossberg's talk were

  • how understanding human cognition, emotion, and decision making can impact economic theory, and
  • how mathematical understanding of cooperative-competitive dynamics can impact economic theory.

The Nobel prize-winning work of Kahneman and Tversky on Prospect Theory is an illustrative example of the first item. Grossberg's talk showed how properties of the cooperative-competitive and cognitive-emotional neural systems that were developed to explain large behavioral and neural databases exhibit emergent properties that are economically relevant, including results about the voting paradox, the Invisible Hand, how to design stable economic markets, irrational decision making under risk (Prospect Theory), probabilistic decision making, preferences for previously unexperienced alternatives over rewarded experiences, and bounded rationality.
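The Prospect Theory reference can be made concrete with Kahneman and Tversky's value function; the exponent and loss-aversion coefficient below are the commonly cited estimates from their 1992 paper, used here purely for illustration:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect Theory value function: concave for gains,
    convex and steeper (by the factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Loss aversion: a loss of 100 feels more than twice as bad
# as a gain of 100 feels good.
```

This asymmetry between gains and losses is one of the systematic "irrationalities" that, in Grossberg's account, emerge from otherwise adaptive neural mechanisms.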

Neural mechanisms that have been selected by evolution because they support adaptive behaviors that are crucial for survival can give rise to irrational behaviors when they are exposed to certain environments. Occasional irrational behavior is the price we pay for adaptive processes: it is a part of the "human condition". Grossberg continued by asking two important questions: (1) What design principles and mechanisms have been selected by evolution? (2) How do certain environments contextually trigger irrational decisions? He provided answers through describing his widely used equations on short-term (activation), medium-term (habituation) and long-term (learning) memory. He showed how a correct form of these equations can help to explain a wide range of data about behavioral economics and neuroeconomics.
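As an illustration of the activation (short-term memory) level, one can integrate a shunting equation of the general form Grossberg uses, dx/dt = -A x + (B - x) I, in which activity decays at rate A and excitatory input I drives x toward, but never past, the ceiling B. The sketch below is my own and assumes the simplest one-cell, excitation-only case:

```python
def shunting_response(I, A=1.0, B=1.0, dt=0.001, t_end=20.0):
    """Euler-integrate the shunting equation dx/dt = -A*x + (B - x)*I
    from x(0) = 0 and return the (near-)equilibrium activity."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (-A * x + (B - x) * I)
    return x  # equilibrium is B*I / (A + I), always below B
```

The bounded equilibrium B I / (A + I) is a form of automatic gain control: activities stay in range no matter how large the input grows.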

Sunday, September 01, 2013

Kalervo Järvelin 60 years: Research enabling information to be fully accessible for all

Kalervo Järvelin is a professor at the Department of Information Studies, University of Tampere. He has authored over 200 scholarly publications and supervised close to twenty doctoral dissertations. Professor Järvelin has served the ACM SIGIR Conferences as a program committee member, Conference Chair and Program Co-Chair. He is an Associate Editor of Information Processing and Management.

Driven by the idea that information should be fully accessible for all, regardless of format, language or location, professor Järvelin has conducted research on information retrieval. Specific research topics have included dealing with morphological complexity, analyzing the vocabulary mismatch between query and text, developing optimal methods for assessing the relevance of search results, analyzing specific task settings and simulating human information behavior. In 2000, Kalervo Järvelin and Jaana Kekäläinen received the SIGIR Best Paper Award for their paper "IR evaluation methods for retrieving highly relevant documents". This and their other paper "Cumulated gain-based evaluation of IR techniques" have become very highly cited. In 2002, Järvelin chaired the SIGIR 2002 conference in Tampere, Finland. In 2008, he received the Tony Kent Strix Award in recognition of an outstanding contribution to the field of information retrieval. The latest remarkable international recognition was given by the American Society for Information Science (ASIS) for research in information science.
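The cumulated gain measures from the award-winning papers can be sketched as follows (one common formulation of discounted cumulated gain with a log2 discount; the original papers parameterize the log base):

```python
import math

def dcg(gains):
    """Discounted cumulated gain: graded relevance scores accumulate,
    with later ranks discounted by log2 of the rank."""
    return sum(g if i == 1 else g / math.log2(i)
               for i, g in enumerate(gains, start=1))

def ndcg(gains):
    """Normalize by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```

For example, ndcg([3, 0, 0]) is 1.0 since the highly relevant document is ranked first, while ndcg([0, 0, 3]) is about 0.63, penalizing the late arrival of the relevant document - exactly the emphasis on retrieving highly relevant documents early that the papers argue for.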

University of Tampere organized a seminar to celebrate professor Kalervo Järvelin's 60th anniversary on Friday, 30th of August 2013. Professor Pertti Vakkari opened the seminar, describing Järvelin's numerous achievements and his positive personality as a colleague. Invited talks were given by Diane Kelly (Charting the Turn: A Brief History of Interactive Information Retrieval), Timo Honkela (Relevance and Meaning: Interplay between Objective and Subjective), and Turid Hedlund (Open Availability of Information and Mobile Knowledge Collaboration). Urpu Ilasmaa presented memories from a family album. Ilkka Mäkinen presented a poem that described different aspects of Järvelin's academic and personal profile.