Can Ubimus Technologies affect our Musicality?

Recent works recognize that musicality is based on and constrained by our cognitive and biological system. Adopting a concept from cognitive science - cognitive offloading - as a principle for technology-supported musical activities, in this paper we discuss some suggestions (guidelines) for designing, developing and evaluating computer music technologies, especially those related to ubimus. We believe that ubimus technology can shape the way we think about music and have a positive (or negative) influence on our musicality.


Introduction
Music is universal and, although there is no inter-culturally correct definition of music, several presumptive universals indicate that musicality is a prominent and distinctive characteristic of humankind (Trehub et al. 2015). Indeed, we think everyone is a skilled and sophisticated musical listener. Even those who consider themselves "unmusical" or do not themselves produce music have implicit knowledge of their culture's musical forms and styles (even if it cannot be expressed explicitly). For instance, all individuals have an implicit understanding of the melodic, rhythmic, and harmonic regularities of their culture's music, in the same way that they unconsciously know the structure and rules of their native language (Honing 2018).
Since our cognition is influenced, perhaps even determined, by our experiences in the physical world, our goal is to investigate how musicality is affected by the technology we use (and build) to support our musical activities. This is not pioneering work: some time ago, Malloch and Wanderley (2017) adopted an embodied cognition perspective to investigate DMI design and performance, and technology has created new modalities for our everyday activities. The growing human-machine integration results in a broader understanding of human experience and cognition. In the same way that McLuhan (McLuhan and Lapham 1994) discussed how communication technology - i.e. all kinds of media, mainly electronic media - becomes a technological extension of our bodies, configures the awareness and experience of each of us, and also affects our cognitive organization, we think that the use of ubimus technology could equally shape the way we think about music and have a positive (or negative) influence on our musicality.
In recent years, we have been particularly interested in investigating how we can provide better technological support for musical activities and in making the Computer Music-HCI connection more explicit. Here we aim to discuss how Computer Music - in particular Ubiquitous Music - could borrow a number of cognitive principles from an HCI perspective, and also how cognitive offloading concepts and principles (Risko and Gilbert 2016) could be adapted and used in the design and evaluation of ubimus technologies. By cognitive offloading, we mean the reliance on a (computer-based) device to reduce the cognitive demand of performing a task.
In this paper, we intend to discuss and present our hypothesis: our musicality is influenced (positively or negatively) by ubimus technology. After presenting some fundamentals of ubimus and some ideas about the cognitive and biological traits related to music and musicality, we discuss a number of suggestions (guidelines) to be taken into account when designing, developing and evaluating computer music technologies, especially those related to ubimus. Our final intention is to provide insight into whether and how sound cognitive, user-centered knowledge can be transferred to the computer music context, particularly by adopting the cognitive offloading perspective.
The paper is structured as follows. First, we present a panoramic view of ubimus technologies. Second, we summarize some works relating cognitive and biological concepts to musicality, aiming to use them as the fundamentals of our approach. Third, we propose a number of suggestions - inspired by these fundamentals - that we consider useful for designing, developing and evaluating computer music technologies. The last section contains a final discussion.

Ubimus technologies: a panoramic view
The rise in popularity of touchscreen smartphones and mobile apps, together with rapidly advancing networks, mobile processors, and storage technologies, led to a convergence in which smartphones replaced mobile phones, organizers, and portable media players as the single device most people carried. Such advances in computing technologies and products, which integrate more and more functionality and change continuously at an increasing rate, have been a motivating force behind changes in media arts theory and practice, demanding broader and more challenging approaches. As we move from desktop personal computers, first to a multiplatform and distributed Internet, with its web and rich applications, and now to mobile convergence devices and multi-user ubiquitous computing environments, we need to cope with the significance of these media in many people's lives. As more advanced portable devices become available, it is becoming increasingly important to have more sophisticated and, at the same time, more intuitive creativity support tools, particularly ones allowing users to participate in musical activities such as music creation, performance, and experimentation.
We have proposed the adoption of the term Ubiquitous Music (Keller et al. 2014) - or simply Ubimus - to promote practices that empower participants in musical experiences through socially oriented, creativity-enhancing tools. To achieve this goal, our group has been engaged in a multidisciplinary effort to investigate the creative potential of converging forms of social interaction, mobile and distributed technologies, and innovative music-making practices. One of our goals is to develop tools that take advantage of these inclusive contexts, providing conditions for novices to participate in creative activities, ideally in any place and at any moment. One strategy to achieve this goal is to repurpose everyday consumer mobile devices (devices people already own and are familiar with) as ubiquitous music interfaces for musical activities, benefiting from their distinctive capabilities of portability, mobility, and connectivity and, above all, from their availability to the average person.
Ubimus research has targeted both more accessible forms of music making and a search for new modalities of artistic practice. These endeavors entail a deep understanding of the underlying creative phenomena - both those approached by disciplines such as musicology or music cognition and the emerging forms of creativity tied to the intense deployment of technology, involving the adaptive and opportunistic utilization of resources found in everyday settings - i.e., little-c music (De Lima et al. 2017). As in other fast-growing fields, there is a tendency to incorporate technological resources without the support of firm experimental evidence or a consistent theoretical scaffolding. Take, for instance, the recent emergence of the Internet of Musical Things. This proposal was formulated in parallel by Turchet and Barthet (2017) and Keller and Lazzarini (2017). After several exchanges in the ubimus community, the various acronyms (IoMT, IoMUT, etc.) were dropped and the label IoMusT was adopted. In addition, ubimus studies present useful directions, such as the observation that musicians and naïve users have different requirements in exploratory musical activities (Miletto et al. 2007). Musicians find instrumental metaphors straightforward to use and expressive. Non-musicians do not necessarily share this view: they do not rate interfaces based on traditional musical instruments as expressive and productive when exploring the musical possibilities of a tool. With ubimus technology, the traditional roles in music creation and performance (composer, interpreter, listener) change. Ubiquitous music technology has undergone considerable improvements over the last decade, making ubimus tools attractive for musical accomplishments.
This profile-specific characteristic reinforces not only the importance of a careful design of user interfaces for musical activities but also the need to search for new theories, concepts, metaphors and patterns in order to find out how to adopt these everyday devices and technologies in musical activities.

Musicality: Cognitive and Biological Musical Traits
The ability to rely on the external mind might have detrimental consequences for cognition (Carr 2010) because humans act as "cognitive misers", meaning that people tend to eschew costly analytic thought in favour of comparatively effortless intuitive processing (Barr et al. 2015). Cognitive miser is a term introduced by Taylor (1981) which refers to the idea that only a small amount of information is actively perceived by individuals when making decisions; instead, cognitive shortcuts are used to attend to relevant information and arrive at a decision. The miserly nature of human cognition lends itself to an overreliance on simple heuristics and mental shortcuts (Stanovich 2004; Kahneman 2011). The evidence suggests that excessive smartphone users are genuinely lower in cognitive ability and have a more intuitive cognitive style.
For instance, Sparrow et al. (2011) pointed out that when people expect to have future access to information, they have lower recall rates for the information itself and enhanced recall for where to access it. In a musical context, it is known that short-term memory capacity is crucial to the segmentation strategy used by good sight-readers when reading a musical score (Bean 1939; Gabrielsson 2003). Sight-reading is especially crucial in the first stage of the musical performance plan, while acquiring knowledge about the piece and developing preliminary ideas about how it should be performed.
According to Gabrielsson (2003), it is also in this first stage that structural analysis reveals the real meaning of the musical information. Such a cognitively demanding task requires a substantial amount of analytical reasoning that, in turn, can ultimately be entrusted to our smartphones, as demonstrated by Barr et al. (2015). The second stage - the musical performance plan - involves hard work on technical problems to establish the spatiomotor pattern required to perform the music.
Finally, the third and final stage is a fusion of the two previous stages with trial rehearsals that produce a final version of the performance (Gabrielsson 2003). The last two stages demand executive functioning and anxiety control, and yet, once again, the dependence on smart devices plays a significant disrupting role in performing this task (Clayton et al. 2015;Hartanto and Yang 2016). Despite the evidence, it would be premature to state that the very best technology designed to support musical activities is atrophying our musicality.
Nevertheless, it is possible to approach this issue from a different angle: researching the cognitive and biological traits involved in musical thinking and applying them to the design of new ubimus technologies. By doing so, we would lean towards innate and primitive structures related to music-making, which are unlikely to change due to the behavioural overuse of these technologies. If musicality can be defined as a natural, spontaneously developing set of traits based on and constrained by our cognitive and biological system, music in all its variety can be defined as a social and cultural construct based on that very musicality (Honing and Ploeger 2012), as discussed in the next section.
We can all perceive and enjoy music. Over the years, it has become clear that all humans share a predisposition for music, just like we have for language. Recognizing a melody and perceiving the music's beat are examples of traits based on and constrained by our cognitive abilities and underlying biology (trivial skills for most humans). Even infants are sensitive to such features, which are shared across cultures (Savage et al. 2015; Trehub et al. 2015). Other common human traits in musicality reported by Honing (2018) are: a) relative pitch (e.g., contour and interval analysis); b) regularity and beat perception; c) tonal encoding of pitch; and d) metrical encoding of rhythm. Until relatively recently, most scholars were wary of the notion that music cognition could have a biological basis. Music was viewed as a cultural product with no evolutionary history and no biological constraints on its manifestation. This view is supported by the belief that music has not been around long enough to have shaped perceptual mechanisms over thousands of generations.
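The notion of relative pitch can be made concrete with a small sketch (our illustration, not part of the cited studies): encoding a melody by its intervals and contour is invariant under transposition, which is why a listener recognizes a tune in any key.

```python
# Illustrative sketch: relative pitch modelled as interval and contour
# analysis. Both representations are unchanged by transposition.

def intervals(midi_pitches):
    """Successive semitone intervals of a melody."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def contour(midi_pitches):
    """Coarse melodic contour: +1 up, -1 down, 0 repeat."""
    return [(d > 0) - (d < 0) for d in intervals(midi_pitches)]

melody = [60, 60, 62, 60, 65, 64]        # a melody in C (MIDI note numbers)
transposed = [p + 7 for p in melody]     # the same melody a fifth higher

# Absolute pitches differ, but the relative representations coincide.
assert intervals(melody) == intervals(transposed)
assert contour(melody) == contour(transposed)
```

The absolute pitch sequences share not a single value, yet their interval and contour codes are identical, mirroring how listeners track melodic identity across keys.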
Moreover, in contrast to speech, this musical knowledge is acquired relatively slowly and not equally by all individuals of a given culture (Repp 1991). However, such notions do not explain the presence of music in all cultures and time periods, let alone in other species. More recently, studies have indicated that our music capacity has an intimate relationship with our cognition and underlying biology, mainly when the focus is on perception rather than production (Winkler et al. 2009; Fitch 2006; Honing 2018). Comparative research shows that although music itself may be specifically human, some of the fundamental mechanisms that underlie human musicality are shared with other species. For Darwin, music had no survival benefits but offered a means of impressing potential partners, thereby contributing to reproductive success; if so, this raises the possibility that these cognitive traits are the target of natural selection (bear in mind that cognitive traits are polygenic). Darwin argued that musical vocalizations preceded language (Fitch 2013). Impressing potential partners may be a feasible purpose for music; however, there are divergent studies on that matter. Other reported purposes for music are: a) the promotion and maintenance of group cohesion, working as a glue that enhances cooperation and strengthens feelings of unity (Merker et al. 2009); b) easing the burdens of care-giving and promoting infant well-being and survival (Dissanayake 2008) (this perspective sees such vocalizations as having paved the way not only for language but also for music (Brown 2011)); and c) that music is a technology or transformative invention that uses existing skills and has consequences for culture and biology (Patel 2008).
While there might be considerable evidence that musicality components overlap with non-musical cognitive features, this is in itself no evidence against musicality as an evolved biological trait or set of traits. It still has to be demonstrated that the constituent components of musicality, when identified, are indeed domain-specific. As with language, musicality could have evolved from existing elements through evolutionary processes, such as natural or sexual selection. Alternatively, based on the converging evidence for music-specific responses along specific neural pathways, it could be that brain networks that support musicality are partly recycled for language, thus predicting more overlap than segregation of cognitive functions. All in all, a consensus is growing that musicality has deep biological foundations, based on accumulating evidence for the involvement of genetic variation (Särkämö et al. 2016; Oikkonen et al. 2016). Recent advances in molecular technologies provide an effective way of exploring these biological foundations, such as genome-wide association studies aiming to capture the polymorphic content of a large phenotyped population sample.

Suggestions for the Design of Interactions in UbiMus
In this section we propose a number of suggestions related to interaction and ubimus, which may ultimately affect the way ubimus technologies are designed and evaluated. So far, based on the literature reviewed, it has been established that: a) there might be cognitive and biological traits related to musical activities; b) some human cognitive skills could be affected by ubiquitous technology, especially by connected mobile devices. The question then is: how can we design ubiquitous technology for musical activities (UbiMus) that makes the most of our innate predisposition to music (musicality) and minimizes the detrimental cognitive effects of the extensive use of such devices?

Historically, digital artefacts were primarily tools intended to be used instrumentally for solving problems and carrying out tasks, and mostly to be used individually. In this scenario, concepts such as user goals, task flows, usability and utility were (and still are) very valuable. However, it turns out that digital technology in today's society is mostly used for communication (many-to-many), entertainment, and pleasure. Here is where user experience design thrives. As the name suggests, user experience design is about designing the ideal experience of using a service or product. It is about how people feel about a product and their pleasure and satisfaction when using it, looking at it, or holding it. Every product that is used by someone involves a user experience. Numerous theories, methodologies, and frameworks help designers create products focused on user experience. Though it is not in this work's scope to discuss them, they all suggest paying close attention to users' needs and expected behaviour (e.g. User-Centered Design). The user should be at the center of the design process; they should be listened to and be involved. Overall, it is essential to consider what people are good and bad at, at both the motor and cognitive levels.
For that reason, Human-Computer Interaction (HCI) has historically been related to the fields of ergonomics and cognitive sciences (Preece et al. 2015). Next, some important aspects of human cognition related to music are presented, aiming to guide new computer music technology development.

Physicality of the Musical Instruments
One of the essential qualities of a DMI is how it compares to existing systems (Miranda and Wanderley 2006) since this provides much of the context in which both performers and audiences will interpret its behaviour. This may be linked to the design principle of consistency.
Interactions with traditional acoustic and electroacoustic musical instruments can usually be typified as the physical manipulation of a vibrating object or objects. Interactions with digital musical instruments are profoundly different from the perspective of control, since sensors are manipulated and the virtual vibrating system (or equivalent sound synthesis generator) is modified by the sensor signals only as mediated by a computer (Malloch and Wanderley 2017). This can deprive musicians of the very subjective feeling of actually shaping the sound. Although DMIs are typically tangible, they may lack a clear relation between physical and digital representations. The production of structured communicative sounds by striking objects with limbs, other body parts, or other objects (e.g. "drumming") appears to constitute a core component of human musicality with clear animal analogues (Fitch 2015).
Past experiences, preconceptions, opinions and tastes of performers and audiences influence their perception of the interaction. Furthermore, audiences see the interaction between a musician and their instrument in a given context. Thus, musicality or expressivity is not inherent in an instrument but emerges in the performer's interaction with it (Malloch and Wanderley 2017).
There is an intimate relation between human cognition and manipulation of the body and objects in the physical environment. For example, we rely on smartphones and search engines to store and retrieve information. In other words, we often think using our bodies and the external world. One critical function that these mind/body/world interactions afford is cognitive offloading -the use of physical action to alter the information processing requirements of a task in order to reduce cognitive demand (Risko and Gilbert 2016).
As noted by Malloch and Wanderley (2017), DMIs do not necessarily need to comply with physical laws (the vibration of air columns, strings, membranes or bars); therefore, the connection between the interface (or input device) and the synthesis algorithms is somewhat arbitrary. This fact poses a challenge to designers, since acoustic instruments have specific properties due to their materiality, which limit their sonic scope. There are "natural" mappings between bodily energy exertion and the resulting sound, whether plucking a string or hitting a drum with a stick (Magnusson and Mendieta 2007). Furthermore, the haptic feedback determined by acoustic instruments' shape and physicality is entirely missing in their digital counterparts (Bech-Hansen 2013). In acoustic instruments (the human voice included), the production of vibrato is related to the amount of force applied to the instrument, which gives the player a sense of the instrument's state. This is not the case with software and DMIs.
One of the most profound differences between traditional and digital musical instruments is that DMIs are generally reprogrammable. Since the mapping between gesture and sound is implemented in software rather than in a physical structure, DMIs (or software instruments) tend to be too broad to get to know thoroughly (a "frightening blank space" of endless possibilities). The perception of making a physical object vibrate and feeling the sound source directly and naturally is something that computer systems lack. Playing digital instruments seems to be less of an embodied practice (where motor memory has been established), as the mapping between gesture and sound can be easily changed. In a survey with over 200 musicians, Magnusson and Mendieta (2007) reported that participants expressed the wish for more limited expressive software instruments, i.e. not software that tries to do it all but software that "does one thing well and not one hundred things badly." Another relevant aspect regarding the physicality of musical instruments is a concern with their perennity and durability. It is frustrating to have to deal with updates, fixes, compatibility issues, and the overall uncertainty about the continuation of commercial digital instruments or software environments. An acoustic instrument does not have a due date, and it will not be made obsolete by next year's release. Acoustic instruments have been around for so long that one can probably find all the support and spare parts one needs in a local music shop. It is about credibility, an essential value of the user experience.
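The arbitrariness of software mappings can be illustrated with a minimal sketch (hypothetical names, not an existing DMI API): the same gesture drives entirely different sonic parameters once the mapping function is swapped, something no physical vibrating structure allows.

```python
# Hypothetical sketch of why a DMI's gesture-to-sound mapping is
# "somewhat arbitrary": the mapping is just a replaceable function,
# unlike the fixed physics of a string or air column.

def pluck_position_to_pitch(pos):
    """One possible mapping: pos in [0, 1] -> frequency 220..440 Hz."""
    return 220.0 * (1 + pos)

def pluck_position_to_filter(pos):
    """The same gesture, remapped: now it drives a filter cutoff in Hz."""
    return 500.0 + 4000.0 * pos

class SoftwareInstrument:
    def __init__(self, mapping):
        self.mapping = mapping        # the mapping lives in software...

    def remap(self, mapping):
        self.mapping = mapping        # ...so it can be replaced at any time

    def play(self, gesture):
        return self.mapping(gesture)

inst = SoftwareInstrument(pluck_position_to_pitch)
assert inst.play(0.5) == 330.0        # gesture interpreted as pitch
inst.remap(pluck_position_to_filter)  # identical gesture, new meaning
assert inst.play(0.5) == 2500.0       # now interpreted as a cutoff
```

An acoustic string cannot be "remapped" in this sense; the sketch makes explicit both the flexibility and the blank-space problem discussed above.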
So, does that mean there are only positives with acoustic instruments? Surely not. When the sound production mechanics are tied to the interface, instruments may not prioritize ergonomics, leading to discomfort, errors, and injuries. Modern technology has made it possible to separate the user interface from the sound production mechanism of an instrument. As a result of this separation, the user interface can be optimized for usability, while the sound production of the instrument can be separately optimized without constraints such as keeping the pipes close enough for a musician to be able to reach each pipe or each finger hole with her fingers (Mann 2007). Despite the advantages, the price paid for separating the user interface from the sound-producing medium is decreased physicality. The synthesizer fundamentally changed the haptic aspects of musical performance by virtually eliminating them. Simultaneously, however, the synthesizer also augmented the sonic vocabulary, paving the way for new musical expression through sounds and timbres (Bech-Hansen 2013).
Musical instruments should ideally produce high-quality sounds while not being too challenging to play, be durable, and be relatively affordable (Manchester 2006). Manipulating an acoustic musical instrument for an extended period of time will almost certainly lead to injuries resulting from the accumulation of microtraumas when human physiological limits are exceeded. In fact, the problem is so severe that 48-66% of string players report injuries severe enough to interfere with their ability to perform (Shan and Visentin 2003).
We summarize three suggestions related to the physicality of musical instruments: a) invest in a durable design using good-quality materials that evokes desire in users - not a toyish design; in fact, a redesigned augmented instrument would be preferred; b) limit the features of the instrument so that it has an easy learning curve but incorporates deep potential for further exploration, so it does not become boring; c) avoid the temptation to simply copy the interface of an acoustic instrument onto a new digital counterpart. It is positive to incorporate familiar materials (e.g. guitar strings) and make use of the known haptic vocabulary (e.g. the vibration of a string when plucked), but do it after ergonomic studies and evaluation with actual users to avoid possible injuries.

Establishing a reference to be imitated
(True) imitation is innate. It is well developed in humans, being observed in newborn babies both for fostering learning and for yielding pleasure. There is a distinction between imitation that copies the task structure and hierarchical organization, and imitation that copies movements. True imitation focuses on the goal; in other words, the execution of the action as a function of the goal (Byrne and Russon 1998). In a musical context, it can be approached from different viewpoints, such as imitation skills, musical figures, imitation of symbols, imitation of moving sonic forms (corporeal imitation), and imitation of group behaviour (alleloimitation). Playing a musical instrument starts with the imitation of low-level skills and low-level challenges. However, as skills improve, the challenges can rise to a higher level. When skills and challenges are in equilibrium, this gives rise to an optimal experience or pleasure. Learning to play a musical instrument is, therefore, a typical example of true imitation. It draws on the ability of the student to focus on what is essential in the teacher's example. Even if the instrument is not the same, it is still possible to imitate particular behaviours and playing styles because the student focuses more on the goals and less on the precise movements. However, the student's ability to see the movements and gestures of the teacher may be an essential component in learning to play a musical instrument. The visual observation of expressive movements may facilitate the mirroring of the teacher's intentions in the student's intentions (Leman 2008). The role of mirroring in music education has been confirmed by a brain imaging study (Buccino et al. 2004), in which musical instrument performance was used to show that the decomposition into elementary motor components is encoded by the mirror neurons.
When the action to be imitated corresponds to an elementary action already present in the mirror neuron system, this act is forwarded to other structures and replicated; in that case, no learning is needed (Leman 2008). The conclusion here is that, in order to capitalize on the innate human capability to imitate, the designers of a computational performance tool must consider ways of facilitating this process of true imitation, both by providing key examples and by giving performers awareness of how their actions compare to those of others. Guidance and a reference are needed.
We summarize three suggestions related to the usage of (true) imitation as a strategy for musical learning and communication: a) implementation of comparison mechanisms that allow casual players to match their performances against more skilled ones; b) tutorials with professional players (setting an idol reference) in immersive, multi-angle learning environments, which may contribute to a faster development of the required motor skills; c) adoption of elementary motor gestures and movements in an effort to reduce motor-related learning, making sure, however, that the movements are meaningful and do not conflict with one another or with established patterns.

Building blocks

The studies of Barr et al. (2015) suggest that people who think more intuitively and less analytically when given reasoning problems are more likely to rely on their connected devices: people may be prone to look up information that they actually know or could quickly learn, but are unwilling to invest the cognitive cost associated with encoding and retrieval. In that sense, a possible approach is to offer building blocks that simplify a set of complex tasks and that can be examined more closely whenever the user feels prepared to do so. These abstractions can represent performers' actions, musical structures (e.g. arpeggios), emotional intention, improvisation strategy (with the use of AI), riffs, samples, rhythmic patterns, etc. Not only should these blocks be readily available and searchable, but they should also be suggested based on context and user profiling. Note, however, that music is still believed to be mostly a matter of the "intuitive" right brain - the avatar of emotion and creativity (Repp 1991). If that is so, chances are the user will never look into those building blocks, since one might choose not to engage in costly elaborative encoding, knowing that the knowledge can be procured externally. Therefore, besides being an excellent approach to managing frustration and anxiety, the building blocks strategy might not be ideal, for example, for musical learning software.

We summarize three suggestions related to the building blocks approach: a) improved search engines based on musicality: vocalization/humming searching and/or rhythmic searching based on the physical relation between the physical and digital representation (e.g. drumming), with results possibly sorted according to the user's skill level - mainly, do not offer too much too soon nor too little too late; b) intelligent recommendation systems: autocompletion of musical phrases, intelligent harmonizers, and timbre recommendation based on a particular musical style and user profile; c) performance-related assistance, such as adaptive accompaniment systems or an intelligent "autopilot" mode. Whatever the level of the assistance offered, the design must also reflect a well-thought balance between assistance and the user's own musical agency.
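A minimal sketch (our illustration, not an existing ubimus API) of the building-blocks idea: a task requiring interval theory - spelling an arpeggio - is wrapped in an abstraction the user can apply immediately and inspect later.

```python
# Hypothetical building block: a chord symbol is expanded into notes
# without the user needing to know the underlying interval structure,
# which nevertheless remains open for later study.

CHORDS = {"maj": [0, 4, 7], "min": [0, 3, 7]}   # semitone offsets from the root

def arpeggio(root_midi, quality="maj", octaves=1):
    """Expand a chord symbol into an ascending arpeggio (MIDI note numbers)."""
    offsets = CHORDS[quality]
    return [root_midi + 12 * o + s for o in range(octaves) for s in offsets]

# A novice asks for "C major arpeggio" with no theory knowledge required:
assert arpeggio(60) == [60, 64, 67]
# ...and can later open the block to see how a minor arpeggio differs:
assert arpeggio(57, "min", octaves=2) == [57, 60, 64, 69, 72, 76]
```

The cognitive cost of encoding interval structure is offloaded to the block, consistent with the trade-off discussed above: immediately usable, optionally learnable.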

Movements and Gestures
As previously mentioned, people's engagement with music is in a way similar to how they engage with other people (Leman 2008): the process of music appreciation - although also including cognitive appreciation and interpretation - is firmly based on mirroring body movement. To sound natural in performance, expressive timing must conform to the principles of human movement (Honing 2003). People's tendency to move in synchrony with auditory rhythms is known as the ideomotor principle: the perception of movement will always induce a tendency to perform the same or similar movements (Knuf et al. 2001). The effect is observable in the tendency to tap along with the beat of the music (Leman 2008), including moving with the instrument up and down while playing (Wanderley 2002). The beat is the most natural feature for synchronized movement because it appeals to fundamental biomechanical resonances (Hallam et al. 2005). In this regard, Knuf et al. (2001) ran a comprehensive study on ideomotor actions and verified that movements did not always occur without awareness, but they did occur without awareness of voluntary control. They also found clear evidence that people tend to perform the movements they would like to see (intentional induction), whereas results are less clear regarding perceptual induction (movements that people see). Perceptual induction could only be verified through non-instrumental effectors: in their experiment, the effect appeared for both head and foot. For hand movements (the instrumental effectors), intentional induction is much more pronounced than perceptual induction. Corporeal articulation is also related to musical expressiveness (Wanderley et al. 2005) and can be seen as an indicator of intentionality (studied, as seen above, in terms of this mirroring process) (Leman 2008).
In general terms, movement in response to music is often seen as a gestural expression of a particular emotion (sadness, happiness, love, anger) that is assumed to be imitated by the music (Friberg and Sundberg 1999). Therefore, a fundamental question is: if articulations are a kind of expression, how do they relate to expressiveness in music (Leman 2008)? Bear in mind that the expression of emotion is only one aspect of corporeal involvement with music, since corporeal articulations can also be used to annotate structural features, such as melody, tonality, and percussion events. In summary, our gestures and movements give away our intentions and must be considered when musical activities take place.
Regarding the mapping of gestures and movements, we summarize four suggestions: a) movements are important for controlling rhythmic information, both during performance and for data input; hand and upper-limb (instrumental effector) movements are preferred; b) movement is also important for controlling dynamics and other parameters of musical expression; other forms of body language, such as facial expression detection, may also lead to good results; c) do not impose a set of gestures and movements that does not conform to our existing sensorimotor skills; movements that represent manipulation skills should be avoided (i.e., air guitar), and it might be a good idea to observe and learn the personal movements of a particular user during a musical performance; d) perceptual induction could be used as a way to get engagement feedback from the audience.

Todd (1992) defends the principle that performance and the perception of tempo and musical dynamics are based on an internal sense of motion. This principle reflects the notion that music performance and perception have their origins in the kinematic and dynamic characteristics of typical motor actions. For example, the regularities observed in a sequence of foot movements during walking or running are similar to the regularities observed in sequences of beats or note values when a musical performance changes tempo. A shared assumption of these lines of work is that we experience and make sense of musical phenomena by metaphorically mapping concepts derived from our bodily experience of the physical world onto music. Accordingly, listeners hear unfolding musical events as shaped by the action of specific musical forces that behave similarly to the forces behind our movements in the physical world, such as gravity and inertia (Dogantan-Dack 2006). Baily (1985) even argues that the performer's internal representation of music is in terms of movement, rather than sound.
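Suggestion (a) above, using hand and upper-limb movements as rhythmic input, can be sketched as a simple onset detector over accelerometer data. The sensor format, sampling rate, and threshold value are assumptions made for illustration only:

```python
# Illustrative sketch: extracting rhythmic onsets from hand-movement
# data using a threshold on acceleration magnitude. An onset is
# registered each time the magnitude crosses the threshold from below,
# e.g. a tap or a downward hand strike.

import math

def onsets(samples, rate_hz, threshold=1.5):
    """Return onset times (seconds) from (ax, ay, az) samples."""
    times = []
    above = False  # are we currently above the threshold?
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag >= threshold and not above:
            times.append(i / rate_hz)  # rising edge -> rhythmic onset
            above = True
        elif mag < threshold:
            above = False
    return times
```

Tracking only rising edges keeps a single sustained movement from producing a burst of spurious onsets; per suggestion (c), the threshold could also be adapted to the personal movement style observed for a particular user.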
Honing (2012) argues that regularity and beat perception are among humans' innate musical traits. ten Cate et al. (2016) go further, studying in different species both the ability to recognize regularity in an auditory stimulus and the ability to adjust one's own motor output to the perceived pattern. Although rare, this ability appears in some animals as well. Human rhythmic abilities obviously did not arise to allow people to synchronize with metronomes but rather with other humans' actions in groups, known as social synchronization. Thus, by the ecological principle, the concept of mutual entrainment among two or more individuals should be the ability of central interest, rather than beat perception and synchronization (BPS) with a mechanical timekeeper (Fitch 2015). In the wild (i.e., outside the lab), the mutual entrainment of two or more individuals by such a mechanism obviously cannot occur unless they themselves are capable of producing an evenly paced entraining stimulus of some kind (such as clapping, stomping, or drumming) within the tempo range of their predictive timing mechanism (Merker et al. 2015). The most sophisticated form of synchronization involves beat-based predictive timing, where an internal beat is tuned to the frequency and phase of an isochronous time-giver, allowing perfect zero-degree phase alignment. This stimulus makes the very next beat in the sequence predictable, allowing the timing mechanism to align (or latch) its produced behaviour to the stimulus with zero, or even a small negative (anticipatory), phase lag, typical of human sensorimotor synchrony (Fraisse 1982). Because of reaction-time limitations, such alignment cannot be based on responding to each stimulus event. Instead, it requires a predictive (anticipatory) and cyclical motor timing mechanism that takes an evenly paced stimulus sequence as input.
Naturally, reaction times to predictable stimuli are shorter than those to unpredictable ones; hence, preparatory cues such as "ready, steady" or a drummer's count-in allow quicker responses.
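The anticipatory mechanism described above can be sketched as a linear phase-correction loop, a standard model in sensorimotor synchronization research: rather than reacting to each beat, the model schedules its next tap from an internal period and corrects a fraction of the previously observed asynchrony. The correction gain `alpha` and the initial reactive lag are illustrative assumptions:

```python
# Minimal sketch of beat-based predictive timing: each tap is scheduled
# ahead of time from the internal period, then nudged by a fraction
# (alpha) of the asynchrony measured against the previous beat.

def synchronize(beat_times, period, alpha=0.5):
    """Simulate tap times phase-locking to an isochronous beat sequence."""
    taps = [beat_times[0] + 0.05]  # first tap is reactive, slightly late
    for beat in beat_times[1:]:
        asynchrony = taps[-1] - (beat - period)  # error on previous beat
        # predictive scheduling: next tap = last tap + period - correction
        taps.append(taps[-1] + period - alpha * asynchrony)
    return taps
```

With each cycle the asynchrony shrinks geometrically (by a factor of 1 - alpha), so the taps converge onto the beat instead of trailing it by a reaction time, which is the behavioural signature of anticipatory, rather than reactive, timing.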

Tempo, timing, regularity
Related to the ability to recognize and synchronize with auditory signals, we suggest: a) make use of auditory predictive cues for synchronization-demanding actions. Sound has long been used as a way to call for attention in interactive systems; however, in interactive systems for music performance this approach must be used very carefully so as not to disrupt the system's main usage. That said, it is possible to lean on our innate ability to encode metrical information, such as rhythm, to call for the user's attention when some action is required. For example, we can use syncopation or other rhythmic cues as feedback for the user; this is, in part, a very common strategy used by DJs when mixing electronic musical styles. b) Offer support for rhythm-related activities when the tempo drops below 70 BPM (i.e., a visual representation of the tempo). c) People are good at synchronizing with other people, not with machines. Tangible user interfaces are a good option to support more than one person exploring the interface together, making use of this innate ability to synchronize.
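Suggestion (b) can be sketched as a small helper that estimates the current tempo from inter-onset intervals and switches on a visual tempo aid below the 70 BPM threshold mentioned above; the function names and the mean-interval estimator are illustrative:

```python
# Sketch of suggestion (b): estimate tempo from inter-onset intervals
# and decide whether to enable a visual tempo aid. The 70 BPM cutoff
# follows the suggestion in the text.

def estimate_bpm(onset_times):
    """Estimate tempo from the mean inter-onset interval (seconds)."""
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def needs_visual_aid(onset_times, cutoff_bpm=70.0):
    """True when the beat is slow enough to warrant visual support."""
    return estimate_bpm(onset_times) < cutoff_bpm
```

A production system would use a more robust estimator (e.g., the median interval, or proper beat tracking), but the decision logic stays the same: supplement the auditory beat with a visual pulse only when the tempo falls below the range we synchronize with comfortably.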

Tonal hearing
Humans readily recognize tone sequences that are shifted up or down in log frequency because the pattern of relative pitches is maintained (referred to as interval perception or relative pitch). Humans are also sensitive to the spectral contour of musical signals (like birds), but relative pitch is the preferred mode of listening for humans. The tonal encoding of pitch also seems to be another innate human ability regarding music. There is substantial empirical evidence that listeners use this tonal knowledge automatically in music perception. Central to pitch organization is the perception of pitch along musical scales. A musical scale refers to the use of a small subset of pitches in a given musical piece. Scale tones are not equivalent and are organized around a central tone, called the tonic. Usually, a tonal musical piece starts and ends on the tonic. The tonal organization of pitch applies to many types of music, but it does not occur in the processing of other sound patterns, such as speech. Although the commonly used scales differ somewhat from culture to culture, most musical scales use pitches of unequal spacing that are organized around 5-7 focal pitches and afford the building of pitch hierarchies. The tonal encoding module seems to exploit musical predispositions, as infants show enhanced processing for scales with unequal pitch steps (ten Cate et al. 2016). Innate musical predispositions imply specialized brain structures to deal with music: tonal encoding can be selectively impaired by brain damage; for example, after damage, some patients are no longer able to judge melodic closure properly. A recent functional neuroimaging study has identified the rostromedial prefrontal cortex as a likely brain substrate for the 'tonal encoding' module (Fitch 2015). Musical memory is also organized, at least partially, independently of the hippocampus, the brain structure central to memory formation.
It is possible that the enormous significance of music throughout all times and in all cultures contributed to the development of an independent memory for music (Merker et al. 2015). Given these findings, we need to reconsider whether the specificity of music-related brain structures is fully exploited by traditional (ubimus) design decisions.
We summarize two suggestions related to tonal and pitch encoding: a) recommendation systems based on relative pitch (contour perception) and tonal encoding (i.e., autocompleting musical phrases based on scales); b) improved perceptual models based on our most sensitive frequency range.
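Suggestion (a) can be sketched as a transposition-invariant phrase matcher that exploits relative pitch: two phrases match if their interval sequences agree, regardless of key. The corpus format and the autocomplete behaviour are illustrative assumptions:

```python
# Sketch of an interval-based (relative pitch) phrase autocompleter.
# A corpus phrase is a candidate completion when its opening intervals
# equal the fragment's; the completion is transposed so that it
# continues from the fragment's last note.

def intervals(pitches):
    """Successive semitone intervals of a MIDI pitch sequence."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def autocomplete(fragment, corpus):
    """Return transposed completions for phrases matching the fragment."""
    frag = intervals(fragment)
    out = []
    for phrase in corpus:
        if intervals(phrase)[:len(frag)] == frag:
            # transpose so the completion follows the fragment's register
            shift = fragment[-1] - phrase[len(fragment) - 1]
            out.append([p + shift for p in phrase[len(fragment):]])
    return out
```

Because matching is done on intervals rather than absolute pitches, a phrase learned in C major completes a fragment played in G, mirroring the way human listeners treat transposed melodies as the same tune.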

Discussion
Music, as we know it, has been made in much the same way for centuries. Musical instruments have indeed evolved to provide a better (though not ideal) fit to our bodies, sound quality has improved, new musical styles have appeared while others have died, and the music industry has reinvented itself several times over. Nevertheless, the cognitive and biological traits that support all this activity, from producing to enjoying music, have remained mostly the same; evolution in this matter is slow. The fact is, humans excel at making music this way. With the popularization of computer technology, much progress had to be made in the field of HCI to make computers usable by non-specialists. Currently, computers (in all their modern shapes and forms) are involved in virtually all human activities, music included. A relevant paradigm change in HCI was Weiser's (1991) vision of ubiquitous computing (UbiComp). The reading of UbiComp concepts and technology by fields such as music and the arts led to the birth of a subfield called Ubiquitous Music (UbiMus). This field is concerned with ubiquitous systems of human agents and material resources that afford musical activities through creativity support tools. In practical terms, ubimus is music (or musical activity) supported by ubiquitous computing concepts and technology. It relates to concepts such as portability, mobility, connectivity, and availability (including for non-musicians).
More recently, another emerging research field positioned itself at the intersection of the Internet of Things, new interfaces for musical expression, ubiquitous music, human-computer interaction, artificial intelligence, and participatory art: the Internet of Musical Things (IoMusT). From a computer science perspective, IoMusT refers to networks of computing devices (smart devices) embedded in physical objects (musical things) dedicated to the production and/or reception of musical content. Possible scenarios for IoMusT include augmented and immersive concert experiences; audience participation; remote rehearsals; music e-learning; and smart studio production (Turchet et al. 2018). These are fascinating research areas, but the big question is: are we cognitively equipped to make the most of them? Could we thrive in this new way of making music, or will it become "yet another" short-lived cool interface for music-making? Is this new technology paying attention to the way we do things, considering what we are good and bad at? Clearly, there is an intrinsic exploratory value in these initiatives, and they will undoubtedly lead to unpredictable artistic outcomes, but this is insufficient to answer the questions above. Moreover, is there a price to pay when we rely so heavily on such technology to perform an activity such as music-making? One of the most pertinent questions for the 21st century is how these increasingly intelligent and invasive technologies will affect our minds. The term "extended mind" refers to the notion that our cognition goes beyond our brains and suggests that individuals may allow their smart devices to do their thinking for or with them (Barr et al. 2015). The ability to rely on this external mind might have detrimental consequences for cognition (Carr 2010).
This is yet to be examined in close detail in the musical context, and it can only be done by applying some of the suggestions discussed in Section 4 to the design of new music technologies and validating them with users. This paper presented and discussed some of the innate abilities involved in musical activities that, in the authors' viewpoint, could be considered in current designs of digital musical instruments and of ubimus technology as a whole. Even though further experimental work is needed, recent biomusicology and neuroscience findings indicate that musical activities may have idiosyncrasies not covered by the traditional approaches of interaction design and user experience design.