Composing aleatoric music through interaction: a composition environment based on interaction with mobile technologies for public spaces

Abstract: Urban public art is art exhibited in public places, contextualized with its surroundings and its audience. Technology is a significant trend in public art due to its many points of connection with human life, fostering different kinds of interaction. Accordingly, this work presents an installation proposal consisting of an environment for creating collaborative aleatoric music through interaction with mobile devices in public spaces. Everyone participating in the installation is a composer, and interaction is the agent of chance, following John Cage’s composition methods. In order to probe the technology, we carried out two pilot studies, followed by a workshop for the installation itself. The two pilot studies led us to a new version that was put into practice during the workshop. During the workshop, participants’ interaction generated fourteen compositions, and the sounds resulting from the collaborative composition were made available to the public through a website.


Introduction
Public spaces refer to places where people can get along and interact in society. Cattell et al. (2008) argue that these spaces are a necessary feature of cities and that their quality is often perceived as a measure of the quality of urban life. These spaces offer facilities and services that Jiang (2018) divides into four categories: (1) health and safety facilities, such as public restrooms and street lamps; (2) leisure facilities, such as newsstands and kiosks; (3) information and communication facilities, such as traffic signs and bus stations; (4) art service facilities, such as flowerbeds and landscaping. Urban public facilities, as an integral part of the city, reflect urban development and urban culture. At the same time, they connect with the urban environment and with people to form a "human-environment" system. When digital media technology is introduced, public facilities and people together become a system for an interactive experience of "user behavior-environment-technology" Jiang (2018).
In this context, people's experiences in public spaces can be enriched when we mix leisure and arts facilities with technology to create an interactive environment for all. To foster public interaction in these spaces, public art in a digital context emerges as an alternative, given that this type of art emphasizes participant subjectivity, participation, bidirectionality, and feedback. It is worth mentioning that digital public artworks differ from traditional public art because the artist does not entirely control the content, and the power of creativity is in other people's hands. In this process, interaction requires that artists delegate their creative role to everyone interacting in the environment Jiang (2018).
In order to create or foster the creation of digital public artwork, a diversity of art expressions can be exploited, but artists often choose music and sound for their performances in public spaces. Following this preference, sound art or sonic art (SA) is a promising approach, as it encompasses very diverse types of creation based on the exploration of sound ("literal or implicit") as the main element, whether in combination with more traditional arts such as painting and sculpture or with digital and interactive media. It is an evolving definition, as it includes more and more emerging genres, in addition to sculptures and sound installations, electroacoustic music, acoustic, algorithmic, and computer music, and noise Salazar (2019).
Furthermore, there is also digital media art, a combination of modern computer technology and contemporary art. There are many theories about what constitutes digital art, derived primarily from its content. Digital media art is the product of the intersection between technology and art, rich in form and connotation Vanrolleghem (2015). Its composition is mainly based on the forms in which digital media art manifests itself, divided into digital image performance, digital video art, network art, digital game art, digital audio art, and virtual reality Cantafio (2014).
The aforementioned concepts, for the purposes of this work, fall into the field of Sonic Interaction Design (SID). SID focuses on challenges such as the creation of sound and of adaptable interactions that can respond to the stimuli or gestures of one or more users. Another concern of SID is how designed gestures and sound feedback can convey feelings and experiences in a creative and expressive way. In musical computing, SID has moved researchers away from the trivial engineering emulation of musical instruments and everyday sounds, guiding them towards the investigation of methodologies to support the design and evaluation of interactive sonic systems (Hermann et al. 2011).
Regarding the technology used in SID, mobile devices are the most used resources: they are already widespread, and they can be explored to provide human-human, human-environment, and human-environment-human interaction Matic et al. (2012). Furthermore, they bring mobility, processing power, connectivity, and relatively low cost. Considering their multiple functionalities, mobile devices can be regarded as general-purpose and can reduce costs both for simple solutions, such as controlling a player's playback, and for more complex ones, such as collaborative music composition through an application like Trackd Trident Cloud Limited (2019).
With these concepts and perspectives in mind, this research is based on a set of hypotheses that give the research a more defined direction Arozo (2003). They are defined as:
• H1 - Perceived Usefulness is affected when there is no collaboration between participants.
• H2 - Intention of Use is affected by cultural influences.
• H4 - Perceived Usability and Ease of Use provide a positive experience with technology even without knowledge of musical concepts.
• H5 - The ability with technological devices and their resources improves personal performance in the use of technology.
Based on the whole perspective of collaborative musical composition and the hypotheses mentioned above, this article presents the partial results of ongoing research that intends to observe people's interactions in sound art installations that use mobile devices, in order to derive a possible interaction model. However, to achieve what is intended, it is necessary to develop technologies and design an environment in which it is possible to interact with sound and the environment through mobile devices. Therefore, this work presents the creation of a public art installation that explores SA, which is systematically improved through specific studies and workshops.
The remainder of the article is organized as follows. Section 2 presents the concepts and theoretical framework that supported this study and their application in this work. Section 3 presents the materials and methods used in this research. To provide technological support and enable musical co-creation, Section 4 presents the Compomus application and the versions used during the study. Section 5 explains the case study, which consists of two pilot studies and a workshop held to assess the acceptance of the proposed technology for musical co-creation. Section 6 presents the results obtained during the application of the studies. The discussion of the results is then presented in Section 7. Finally, the conclusions of the study are presented in Section 8.

Sonic Interaction Design
The way to interact and how to develop the sound design of products or artifacts are studied by Sonic Interaction Design (SID), an emerging field of study that is situated at the intersection of auditory display, interaction design, ubiquitous computing, and interactive arts. SID can be used to describe practice and research in any of the various functions that sound may play in the cycle of interaction between users, artifacts, services, or environments Rocchesso et al. (2008).
A classic example of SID application was the first iPod model, which used a mechanical scroll wheel as the input device: a rotating wheel that allows scrolling between menu items. The element that replaced the mechanical wheel on later iPods is the clicker: a click sound that provides feedback for movement between menu items. This feature gives a tactile feel to the click wheel (a pseudo-haptic illusion), somewhat similar to the rotary dial of old telephones, making scrolling more expressive and more informative. The click is the only sound the iPod produces outside of the headphones and is generated by a small piezoelectric speaker inside the device Hermann et al. (2011).
New forms of interaction with sound have been presented using technology, in playful, artistic, and creative experiences arising from the relationship of art with science and technology. Drawing heavily on SID concepts, the authors in Elen Nas (2016) developed interactive interfaces that use reacTIVision, a software tool that tracks visual tags in order to control sound. With this technology, several of the study's concepts were experimented with in different scenarios, in individual, group, and collaborative musical performances applied in schools and art exhibitions.
In this work, SID is applied as follows: the user assumes one of the sounds available in the application as their identity in the interaction space; what happens is a "sonification" of the participant. This personification of sound occurs in every app version proposed in this work. Users interact with the environment through movement, which is captured and processed by the smartphone's sensors and sent to the audio server. The now "sonified" user has their sound played through the speakers and can interact with other users while inside the interaction space. In the second version, the user controls the reproduction of their sound as well as its spatial location in order to interact with other sounds and the environment. In the third version, which evolved from the first two, the participant fully assumes the representation of the sound in the spatialized environment: when the participant moves, the sound also moves through the speakers.

Creating Collaborative Music with Emerging Technologies
Human beings are very creative and always look for different ways of making music, whether mixing rhythms, languages, or even using different types of instruments. Music composition is a naturally collaborative process, and new technologies and internet infrastructure improvements enable this collaboration in music production and composition to be performed by countless people in different physical locations.
For example, there are multiple apps, but in most of them collaboration happens online, with some exceptions. Soundtrap Lind and MacPherson (2017) is a collaborative platform owned by Spotify: an online Digital Audio Workstation that can be used to record music, create beats with an extensive collection of instruments and loops, start a podcast, or share and edit a track collaboratively. BandLab BandLab (2021) is a cloud-based music app that gives any musician who wants to collaborate access to a mixing table. Like the other apps mentioned, BandLab offers plenty of virtual instruments, effects, and many advanced features to make recording a song on a phone as easy as possible. Among the exceptions is mSynth Hantrakul (2021), a platform that allows live performances with audience collaboration; mSynth uses Artificial Intelligence to produce new songs by mixing other existing instruments. The mix can be controlled by the audience through an app that uses the accelerometer, like a ball balancing on a board. The app also provides voice samples that the audience can trigger to enrich the DJ's performance.
Against current technological trends, our solution focuses on the process of interaction and collaboration of participants in the same place, as mSynth does, but independently, spontaneously, and encouraging communication and organization by the participants themselves. The environment allows users to be autonomous and independent in creating music using their smartphones. The technology, in this case, functions as a means for participants to more easily interact and observe the impact of their actions on the environment.

Real-Time Sound Spatialization
Human beings can determine the location of a given sound thanks to the hunting ability developed by our ancestors. This ability relies on auditory cues perceived by the auditory system. According to Lord Rayleigh Moore (2003), the interaural time difference (ITD) and the interaural intensity difference (IID) are the most important cues. The ITD is the difference in time that the sound wave takes to reach each ear, and the IID is the difference in intensity. It is understood that the auditory system, based on these and more specific cues (such as the order of arrival of the waves and the spectrum), and considering a complex sound, determines the position (direction and distance) of the source through a voting system Gerzon (1974).
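The ITD cue can be made concrete with a small numeric sketch. The code below is illustrative only and not part of the Compomus system; it estimates the ITD with Woodworth's classic spherical-head approximation, with the head radius and speed of sound taken as typical assumed values.

```python
import math

HEAD_RADIUS = 0.0875    # approximate human head radius in metres (assumption)
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def itd(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a distant source at the
    given azimuth, using Woodworth's spherical-head model:
    ITD ≈ (a / c) * (θ + sin θ)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side (90°) gives the maximum delay:
print(f"{itd(90) * 1e6:.0f} µs")  # ≈ 656 µs
```

Delays of this order (hundreds of microseconds at most) are what the auditory system integrates, together with intensity differences, to localize a source.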
In acousmatic music performance, there are no musicians positioned on a stage; the reproduction is performed by an arrangement of loudspeakers positioned in the presentation space and around the listeners. In most presentations, it is the composer who controls a sound mixer placed at the center of the presentation space, processing and directing the sound in real-time to the speakers and thus shaping the space Garcia (1998). In this way, sound moves around the audience, creating an engaging surround effect; the audience usually feels immersed in sounds. The artist who controls the direction and movement of sounds in space through loudspeakers creates the sound spatialization effect (sound diffusion or electroacoustic diffusion).
Real-time spatialization is used in this work as a means of immersion for the participant, as well as bringing greater complexity to the composition, greatly multiplying the possible sound movements through the environment, since the idea is that each participant controls a specific sound rather than a single artist controlling them all.

Music of Chance
Indeterminate music emerged in the 1950s and gained visibility with composer John Cage. His first outstanding work using the "chance composition" technique was "Music of Changes" (1951); it led him to new experiments and forms of composition and opened several doors for new explorations Rossi and Barbosa (2015).
Composers John Cage and Earle Brown were pioneers in leaving their unfinished works open to chance or to the performer's wishes. This was the birth of indeterminate or aleatoric music, as other authors and composers prefer to call it Rocha (2001). In this process of composition, the composer can use several random methods, such as throwing dice, tossing coins, making use of the mathematical laws of chance, or even transcribing abstract drawings. In performance, the composer can leave certain elements to the artist's discretion, indicating only approximate rather than precise pitch, duration, intensity, timbre, and so on; or he may allow the overall architecture of the work to be shaped by the artist, granting him freedom of choice in the order in which the various sections of the work are executed Hoogerwerf (1976).
As composition methods, the music of chance, aleatoric music, and indeterminate music may bring up some misunderstandings or questions, posing some difficulty in differentiating the terms found in the literature. This is because the terms aleatoric, indeterminate, and chance were introduced by authors who conceptualize them and define their procedures differently. Although "aleatoric" and "chance" are related terms, Pierre Boulez appropriates the former with a slightly different meaning from Cage's understanding. Still, with regard to these forms of composing, Pierre Boulez and John Cage are considered the foremost and best-known composers to use this methodology in the composition and execution of their works.
In his essay entitled "Alea", Pierre Boulez states that the term aleatoric relates to a work in which the interpreter has creative autonomy in interpreting it, making the "aleatoric" defined by Boulez very close to what John Cage understands by "indeterminate" Rossi and Barbosa (2015). Even so, Cage uses both terms in his works after 1951: "chance" for compositions in which random processes define the work, and "indeterminacy" for compositions that may be played very differently by different interpreters.
Cage's interest in chance reflected his desire to build music without the intention of directly communicating an idea to the listener; a music in which sounds could be heard simply as sounds, without a priority relationship between them Rocha (2001). In an interview with Jonathan Cott Cage (2012), Cage said he asked himself, "why do I write music?" During his experiments in college, he came to the conclusion that artists do what they do because they have something to say to other people. Even though he did his best to get the messages in his music understood, Cage was not understood. So Cage raised another question: "why do I still write music in a society where people go in different directions, speaking different languages?", feeling himself in a Tower of Babel situation.
From these statements, we can understand, even if superficially, the essence behind the works of John Cage.
In his compositions, Cage leaves aside "what he has to say", leaving the interpretation to the listener. It is still worth noting that his main concern was the exploration of concepts and the demonstration of processes, rather than the manufacture of beautiful and "hearable" musical goods Cage (2012). Cage's most impressive contributions were the two collections of lectures, writings, and other diverse materials entitled Silence (1961) and A Year From Monday (1967), in which, with intelligence and resourcefulness, he takes the reader on an extraordinary (and often purposely random) tour of the random mind.
In terms of Cage's work, a "traditional" musical education can, paradoxically, end up becoming an obstacle to the appreciation of his works. It requires a certain "detachment" from pre-established notions of how music should be, putting aside what music has "conventionally" been Williams (2013). This work must be considered in the same way. Composition techniques and everything that was built on traditional musical composition should be left aside in order to observe the final result of the work as well as its creation process. Like John Cage, the concern of this research is not the quality of the product but the path taken to arrive at the final artwork.
Finally, there is Mozart's dice game, undoubtedly a very interesting creation that uses dice as an agent of chance, as does John Cage's composition technique. Here there are similarities with the adopted procedures, the dice roll to choose the musical element that will be inserted in the work being one of them. The difference between the two is that in Mozart's case there is a concern with harmony and coherence, while in John Cage's these elements are not considered. In Mozart, randomness is controlled: chance only dictates which sound elements are chosen at the time of the draw, within a structure pre-defined over a finite number of possibilities and therefore doomed to the predictable; this concept is discussed further ahead Trenkamp (1976).
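The controlled randomness of Mozart's dice game can be sketched in a few lines. Only the mechanism follows the historical game (two dice selecting one of eleven pre-written measures for each of sixteen positions); the measure identifiers and table below are invented for illustration.

```python
import random

# Hypothetical measure table: for each of 16 positions in the waltz,
# 11 candidate measures indexed by the sum of two dice (2..12).
TABLE = [[f"m{pos}_{roll}" for roll in range(2, 13)] for pos in range(16)]

def roll_two_dice(rng):
    """Sum of two fair six-sided dice: an integer in 2..12."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose_waltz(seed=None):
    """Draw one pre-written measure per position, dice deciding which."""
    rng = random.Random(seed)
    return [TABLE[pos][roll_two_dice(rng) - 2] for pos in range(16)]

waltz = compose_waltz(seed=42)
print(len(waltz))  # 16 measures, one per position
```

Because every candidate measure is pre-composed to fit its position harmonically, the outcome space is finite and always coherent; Cage's methods, by contrast, place no such constraint on what chance may select.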

Methodology
The research discussed in this paper is applied research, given its goal of generating a process for collaborative music creation. Regarding its objective, it is mainly exploratory, which is why the case study was chosen as the procedure for studying interaction and collaboration in aleatoric music creation. A case study was thus carried out to study musical composition in a scenario with no "central actor", that is, without any particular organization, depending exclusively on the collaboration and interaction of collocated people. The case study was divided into three parts: two pilot studies and one workshop. It was carried out in an experimental setting, in an environment simulating an installation in a public space.
The case study consists of a research strategy that allows the researcher to "consider the holistic and significant characteristics of real-life events" Yin (2015), such as, for example, organizational and managerial processes Kohlbacher (2006). This methodology was used based on Yin's claim Yin (2015) that the case study is the most appropriate methodology for research whose main questions are "how?" or "why?". The author also refers to the case study as a suitable methodology for social studies. Merriam (2008) points out that, when using case studies with a qualitative approach to the problem, the researcher is more intensely concerned with the social processes occurring in a given context than with the relationships between variables. Silva, Godoi, and Bandeira-de-Melo Godoi et al. (2006) state that the case study is "centered on a particular situation or event whose importance comes from what it reveals about the phenomenon under investigation".
Our case study was divided into three parts, but since we already knew of the need to conduct all three, they followed the same planning procedure from the literature Yin (2015): the Plan, which consists in deciding the methodology to be used in the research; the Design, where the units of analysis and the probable case studies are defined; the Preparation, consisting of one or more pilot case studies; the Collection, where the data generated by the pilot study are extracted; and finally the Analysis stage, which consists of the analysis of the collected data. Specifically for the case study discussed in this work, the phases were as follows. Plan: the context is collaborative sound composition, placing the user as composer and interpreter of a sound built collaboratively by the individuals themselves, even without previous knowledge of musical composition, using the proposed applications. This is a classic exploratory case study.
Design: the aim is to identify improvement points and evaluate users' acceptance of the proposed technology based on collaboration and interaction in creating music. If users enjoy composing a sound even without harmony or synchronism, that is also a positive result, because the main objective is the enjoyment of collaborating during the experience of a musical composition.
Preparation: it was necessary to have a more solid technology in order to observe interaction and collaboration. For this reason, pilot studies addressing specific technology issues were carried out. Two pilot studies took place: the first to observe and explore interaction and collaboration abilities, and the second focusing on the effect of the technology used, that is, whether participants felt comfortable and stimulated while interacting. They were followed by a workshop with a final prototype that took into account the needs observed during the pilot studies.
Collection: data collection was performed by observing the interaction and through printed post-test questionnaires answered by all pilot study participants after each session. For the analysis of the collected data, the most relevant characteristics of each pilot case study were observed; the data are presented and discussed in detail in the next section.
Analysis: it was based on analyses of video recordings, log analyses of participants' movements, and a post-test questionnaire.
In order to carry out the three parts of the case study at different moments, we considered that the material used had to consist of low-cost devices easily found in everyday life, such as a standard internet router, a generic USB sound card with 5.1 channels, cables, at least four portable loudspeakers, and a notebook. This equipment meets the requirements for a quadraphonic musical composition, in which it is possible to explore sound spatiality.
These studies are described in detail in the following subsections. The two pilot studies were conducted on the same day, and students were divided into groups due to physical space constraints.

Compomus
This research originated from the demand for a partnership between the Faculty of Arts and the Institute of Computing of the Federal University of Amazonas. The idea of this partnership was to join the arts (especially music) with technology, addressing the problem of music composition with the use of mobile technologies.
For this research, a literature review was carried out to survey the state of the art and the tools and technologies that could be used. Tools such as the OSC protocol and Pure Data, and approaches such as sound spatialization and Sonic Interaction Design (SID), were adopted. The OSC protocol is an alternative to the MIDI protocol that can be used for communication between client and server applications and is widely used in applications that need to be controlled over a wired or wireless network. The Pure Data environment and programming language was chosen for its robustness and lightness in use, in addition to its ease of learning: it provides a visual programming method that facilitates the construction of tools even by those who are not programmers. The principles of SID and sound spatialization were also incorporated into the project after the literature review brought us the perspective of sonification of the participant and immersion in the environment, expanding the theoretical, technological, and artistic possibilities of this work.
The scope of this work is electroacoustic music (more precisely, live electronics). The interaction of users and their collaboration through their smartphones become an integral part of the work as a source of originality for the composition of musical works intended to be performed in real-time in public spaces. With the participation of an undetermined number of people, the composition happens through each person's contribution of their sound. Participants may or may not organize the composition among themselves, and they are also free to communicate with each other. All this interaction with the system is processed, reproduced, and recorded in real-time. In this study, interactions are considered to be all participant interventions that result in modification of the environment, such as entering the environment and having one's sound reproduced, or moving around the environment.
The main concept of the proposed system is to allow users to cooperate in composing a sound through their free interaction with other users and with the sound they choose to represent them. The composition space can be a room or an open public space with a limited area. Loudspeakers are needed to reproduce the sounds of the users who are within this space. The dynamics of system use are as follows: once a user is within the defined area, their sound is played in a loop; if the user moves away from the assigned space, the system interrupts the playback of their sound. What defines whether or not the sound is reproduced is the user's distance from the center of the interaction space. The speakers play a critical role in the system, as they are responsible for audio feedback to the users. Users entering and leaving the interaction space turn their sounds on and off on the speakers.
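The distance test that gates playback can be sketched as follows. This is a minimal sketch under stated assumptions: the function names and the 15-meter radius are illustrative, not taken from the Compomus code, and the actual radius logic runs inside the Android App.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes
    (haversine formula, mean Earth radius)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def inside_space(user_lat, user_lon, center_lat, center_lon, radius_m=15.0):
    """True while the participant's sound should loop on the speakers."""
    return haversine_m(user_lat, user_lon, center_lat, center_lon) <= radius_m
```

Each position update from the phone would re-evaluate `inside_space`; a transition from True to False triggers the stop command, and the reverse triggers play.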
As previously mentioned, the Compomus App was developed on the Android platform and functions as a controller for an audio server that processes all the commands sent by the App and plays the sounds. This audio server was developed in Pure Data and can process the audio of several users at the same time; in a stress test, it managed to reproduce about 1000 sounds simultaneously, a limit imposed by the hardware on which the server application was running. Its development took place during a master's degree project Amazonas et al. (2017). The application does not send audio streaming to the server, but commands that inform which sound should be reproduced and, when spatialization is used, the coordinates that should direct the sound. For this, the audio server uses the Open AUDIENCE architecture as a sound spatialization mechanism and the Open Sound Control (OSC) protocol to receive commands over the wireless network.
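Since the App sends commands rather than audio streams, each command fits in a tiny OSC packet. As an illustration of the OSC wire format only (the address and argument layout below are hypothetical, not the actual Compomus message schema), a play command carrying a sound number and coordinates could be encoded like this:

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message supporting int32 and float32 arguments."""
    data = _pad(address.encode("ascii"))
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return data + _pad(tags.encode("ascii")) + payload

# Hypothetical command: play sound 7 at normalized position (0.5, -0.25).
packet = osc_message("/compomus/play", 7, 0.5, -0.25)
# The packet would then go over UDP to the Pure Data patch, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))
```

The entire command is a few dozen bytes, which is why even a modest wireless router can carry updates from many participants at once.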
For testing different interaction perspectives, three solutions with different purposes and technologies were implemented for the Android mobile operating system. The PlayStop version detects the user's presence in the space and reproduces the chosen sound without any additional intervention; playback ends when the application detects that the user has left the defined space. The JoyStick version allows the user, in addition to controlling the sound reproduction, to control its spatial location. In the NoGPS version, the previous concepts were merged: the participant is automatically detected by the environment, and in this case the movement of the sound follows the participant's movements as detected by the environment.
A web server was also designed to control and register users. To support the scenario described above, we developed a diagram of the dynamics of the environment, represented in Figure 1, which comprises: a server with a dual function, web server and audio server; four amplified speakers connected to the audio server for feedback in the interaction space; and a router, which functions as an access point and allows the connection between application and server. In the PlayStop version, the App calculates the set radius and can determine whether or not the user is in the interaction space; when the application detects that the user is within the interaction space, the sound chosen by the user is played through the speakers. When the NoGPS version is used, the app can estimate the user's location, and with this feature it can control the server's sound spatialization.
On the Pure Data platform, a framework for sound reproduction and spatialization Amazonas et al. (2017) was developed together with the Open AUDIENCE architecture for immersive auralization, which provides a differentiated experience for the user Faria et al. (2005). The Ambisonics auralization technique was chosen for this work due to its easy implementation, low computational cost, flexible speaker configuration, capacity to support several people simultaneously, and good directionality of the virtual sound sources. For the mobile side, Android was chosen as the development platform, being the most used platform in the world El País Brasil (2019) and offering ease and freedom of development.
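A minimal sketch of first-order horizontal Ambisonics over a quadraphonic layout shows why the technique is computationally cheap and speaker-flexible. The cardioid decoder below is a textbook simplification, not the Open AUDIENCE implementation; the speaker angles assume the four-speaker square layout described above.

```python
import math

SPEAKERS = [45.0, 135.0, 225.0, 315.0]  # assumed quadraphonic layout (degrees)

def encode_bformat(sample, azimuth_deg):
    """First-order horizontal B-format encoding of one mono sample:
    W (omni), X (front-back), Y (left-right)."""
    th = math.radians(azimuth_deg)
    return (sample / math.sqrt(2),
            sample * math.cos(th),
            sample * math.sin(th))

def decode_quad(w, x, y):
    """Basic cardioid decode of W/X/Y to the four speaker feeds."""
    feeds = []
    for az in SPEAKERS:
        phi = math.radians(az)
        feeds.append(0.5 * (math.sqrt(2) * w
                            + x * math.cos(phi)
                            + y * math.sin(phi)))
    return feeds

# A source panned to 45° drives the 45° speaker hardest and the
# opposite (225°) speaker not at all.
feeds = decode_quad(*encode_bformat(1.0, 45.0))
```

Note that encoding is independent of the speaker layout: moving a sound only changes three numbers (W, X, Y), and any decoder for any ring of speakers can render them, which is the flexibility the text refers to.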
In order to get closer to the concept idealized for the environment, we conceived some ideas that were never implemented because they were not viable in terms of computational or development cost. Other solution ideas were developed as prototypes and tested by the researchers in the laboratory. All tested versions were named after the technology used in them. The first was called Bluetooth Compomus. This version was designed to detect the presence of the participant within the interaction environment using Bluetooth technology; however, it was never even released for testing, because the Bluetooth scanning time is very long, which made the prototype unfeasible. The second version was called Compomus GPS. It was developed because its technology's update times were significantly shorter; however, this technology also had factors working against it, such as high energy consumption and the restrictions imposed on the use of location data. In addition to an inaccuracy of about 17 meters, the GPS module drained smartphone batteries in a short time, which was not suitable for the research.
Figure 1: Representation of the basic layout of the Compomus artwork installation and its working diagram.
When the version used is the JoyStick, the defined radius is ignored: from anywhere within reach of the wireless network, it is possible to reproduce the chosen sound and also direct it to the speaker of one's choice. More details on the two versions are presented in the following subsections.
In the previous versions, the available sounds were of birds from the Amazon region; the idea of using these bird-sound samples came from the aim of recreating the sound environment of a forest. In these new studies, we decided to test different sounds and looked for sites that make generic samples available for free. Therefore, 50 sounds were randomly chosen from the LANDR LANDR (2019) website. Although the sounds used in this study are of the electronic genre, the proposal of the environment is not tied to them, and any other type of sound sample can be used. As mentioned earlier, the concept of the environment is to make a composition based on the concepts of randomness introduced by John Cage, and in this sense issues such as synchronism and harmony are not taken into account, as they are not part of the concept followed.

Compomus PlayStop
The App Compomus PlayStop implements the idea initially proposed in the project: to reproduce aleatoric sounds in order to make a collaborative composition. During the research, however, the possibility of exploring spatial sound also appeared, making it necessary to separate the two solutions so that they could be evaluated separately. The PlayStop version works quite simply. Figure 2 shows the App's screens: on the first, the user registers and chooses a sound. The second is the main screen, where feedback is presented to the user; the message "Nothing playing" is displayed when the user is outside the interaction space, and "Playing your sound" when the app detects that the user is within the defined area. At any moment, the participant can change their sound via a button on the main screen. The third screen lists the available sounds.
In this version, the participant needs to move and leave the defined space to stop the reproduction, which is intended to stimulate movement among users. There is one other way to stop the sound: clicking to change it, which stops the reproduction until the new sound is chosen.

Compomus Joystick
The Compomus JoyStick is the version of the App that explores sound localization as an artistic element: the user has control to move their sound in an environment with scattered speakers. The JoyStick version differs in functionality from the PlayStop version, since it does not require the user to move through space, but it provides a highly immersive experience through the sounds (Figure 3). The dynamics of using the App are very similar to the PlayStop version: the initial screen and registration work in the same way, as does the selection screen for a new sound. The difference is in the main (central) screen, where one button is available to play and another to stop the sound reproduction, along with a directional joystick that controls the movement of the sound. There is also the sound-switching button, which allows the user to change the sound at any time.
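As a rough illustration of how a joystick position could steer a sound among four speakers (a hypothetical sketch; in Compomus the actual routing is done by the server-side Pure Data framework), the stick can be mapped to per-speaker gains:

```python
import math

# Hypothetical speaker positions at the corners of a unit square.
SPEAKERS = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

def joystick_gains(jx, jy):
    """Map a joystick position (jx, jy) in [-1, 1]^2 to one gain per
    speaker: speakers closer to the stick receive more energy, and the
    gains are normalized to keep the total power constant."""
    raw = [1.0 / (math.hypot(jx - sx, jy - sy) + 0.1) for sx, sy in SPEAKERS]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]
```

Pushing the stick toward a corner concentrates the sound in that speaker; centering it spreads the sound evenly across all four.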

Compomus NoGPS
This version has characteristics of the previous two: in the NoGPS version, the user is automatically detected by the system and the chosen sound is reproduced instantly. It still explores sound spatialization, but differently from the JoyStick version: it uses indoor location technologies that let users manipulate the position of their sound according to their own location in the environment. The second screen in Figure 4 shows feedback on the user's estimated position on a Cartesian plane and their distance from the center of the environment. The focus of this version is not only to encourage users to move around the environment, but also to make them experience its sound immersion, realize that their actions interfere with the environment, and thus perceive the movement of their sound more easily.
The NoGPS version was named this way because it does not require the location assistance normally obtained from GPS. Figure 4 shows the main screens of this version; the first and last screens remain identical to those of the versions presented previously. As stated earlier, the differential is the middle screen, which displays feedback on the user's approximate location on a Cartesian plane as well as their distance from the center of the environment.
The coordinates are shown on the user's screen because, in this version, we sought to provide visual feedback to users. A screen projected on a wall by an auxiliary web application written in JavaScript receives the users' location data and renders on screen the "marks" they leave in the environment, for real-time viewing.
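The feedback shown on the middle screen (and mirrored by the projected view) boils down to the estimated coordinates and the Euclidean distance from the center. A minimal sketch of that computation, taking the [-10, 10] plane from the description above:

```python
import math

PLANE_MIN, PLANE_MAX = -10.0, 10.0

def position_feedback(x, y):
    """Clamp an estimated position to the [-10, 10] Cartesian plane and
    return it together with the distance from the center (0, 0)."""
    cx = min(max(x, PLANE_MIN), PLANE_MAX)
    cy = min(max(y, PLANE_MIN), PLANE_MAX)
    return {"x": cx, "y": cy, "distance": round(math.hypot(cx, cy), 2)}
```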

Case Study
As mentioned earlier, in this section we present the case study, divided into three parts. First, in chronological order, the two pilot studies are described; they used two different versions of the Compomus app. Both pilot studies contributed improvements to the app and culminated in a third study in the form of a workshop, where participants used an improved version of the Compomus app.
As this research is exploratory and uses a single case study as its procedure, although divided into three parts, we used the same planning and analysis methods in all of them. The common aim was to evaluate the results obtained in the analysis and find out whether the technology would be well accepted by users; for this, the Technology Acceptance Model (TAM) was used. The TAM model is widely used in the research landscape and has gained relevance due to its main feature: its adaptability to various samples and different contexts. TAM has great potential to explain the variance in the intention to use a technology Scherer et al. (2019). The TAM model was designed to understand the causal relationship between external variables of user acceptance and actual computer use, seeking to understand users' behavior through the usefulness and ease of use they perceive Davis et al. (1989) Rauniar et al. (2014). In the questionnaire, elaborated with 30 questions, we used the Likert scale Dalmoro and Vieira (2014), a type of psychometric response scale commonly used in questionnaires. An advantage of this scale is that it indicates the respondent's attitude towards each statement: when responding to a questionnaire based on it, respondents specify their level of agreement or disagreement with each statement in the instrument. The Likert scale has five levels, defined to capture the choice points of the users interviewed. In this case, the choice points are: strongly disagree, disagree, undecided, agree, and strongly agree.
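The tally used later in the analysis, where a hypothesis counts as confirmed when at least 50% of the answers agree, can be expressed as a small helper (an illustrative sketch of the scoring, not the original analysis scripts):

```python
from collections import Counter

LEVELS = ["strongly disagree", "disagree", "undecided", "agree", "strongly agree"]

def agreement_rate(responses):
    """Percentage of Likert responses that were 'agree' or 'strongly agree'."""
    counts = Counter(responses)
    agree = counts["agree"] + counts["strongly agree"]
    return 100.0 * agree / len(responses)

def hypothesis_confirmed(responses, threshold=50.0):
    """A hypothesis is considered confirmed at >= 50% agreement."""
    return agreement_rate(responses) >= threshold
```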

PlayStop Pilot Study
The study was conducted within the university premises with the participation of 47 volunteer students from technological majors.
• Preparation: a space of approximately 36 square meters was reserved for our studies, as shown in Figure 1. Four portable speakers were connected by cables to a USB sound card used to reproduce the sounds processed on the notebook, which served as the audio and web server. We also used a standard router, located right at the center of the defined space, as the access point for the study. As this version does not explore the spatiality of the sound, all sounds in this study were reproduced equally by the speakers.
• Collection: participants were first asked to download the app from the app store, then to connect to the network made available by the system's router in order to register and use the space. The workings of the App and the dynamics of the interaction space were explained to the participants (Figure 5). Registration data such as email and name, and the entries and exits made by each participant, were recorded in a log on the audio server, generating a database for analysis.
• Analysis: the objective is to find out which difficulties lead to low use of the system, considering the technological and cultural aspects of each participant, following guidelines defined by the TAM model. The importance of this research lies in the need to analyze the human factor in the use of technologies, not only as the front line in the use of software but as the main and indispensable agent for the success of the technology and, therefore, of the generated result (the composition).
Participants were asked to feel free to interact as they wished. User interactions were recorded in audio and video; however, only images can be checked on the page created for the demonstration of results in photos and audio Compomus (2019). Users could use the app in the environment for five minutes in each round of the study; during that time, they were free to interact or not.

JoyStick Pilot Study
The study was conducted on the university premises with the participation of the same 47 volunteer students from technological majors.
Figure 5: Participants during the study making use of the App and defining composition strategies among themselves.
• Preparation: to perform this study, after using the PlayStop version the participants were invited to use the JoyStick version of Compomus in the same space, with the same structure as the previous study (Figure 6).
• Collection: participants were asked to download the app in the JoyStick version. In this case, a new registration was not necessary, as both versions use the same database. As in the previous study, we explained to the participants the dynamics of using the App and how it works. For this new version a new database was created in which, in addition to the registration of names, emails, and the participants' entries into and exits from the environment, their movements and chosen sounds were captured for later analysis.
The time available to the participants of this study was five minutes, during which it was suggested that they use the App however they wanted. An example of the use of this version can also be checked on the page created at http://compomus.wixsite.com/compomus Compomus (2019).
After each round of use of the applications, respondents were asked to answer the questionnaire according to their own view and the feelings experienced during the study; they were also asked to be as sincere as possible in their answers.

No-GPS Workshop
In this workshop, the preparation of the environment included a feedback element: a projector was made available with a prototype application that received the location data sent by the phones and rendered them on a screen as if painting with colored dots. Before the environment was made available to participants, we asked them to download the app from the app store. After that, the purpose of the study was explained, followed by a recommendation that they feel free to interact and instructions on how the environment and app worked.
However, due to an equipment overload, different from the first studies because several applications were being processed at the same time, part of the location logs for some groups could not be registered. Thus, the only data considered in this study were those of 19 participants in four groups, which were recorded with integrity.
Participants used the environment for approximately five minutes. The objective of this workshop was to observe the behavior of the participants when using a technology focused on simplicity; for that, the user's movement was required to obtain a complete experience and enable new results. Figure 7 illustrates a group of participants using the environment and application during the workshop.
As a result, only the four compositions generated by the groups whose data were recorded with integrity in the logs were considered in this study. The sound compositions in the indeterminate style were recorded in a quadraphonic format, which allows greater immersion both for those present at the time of recording and for those who have the opportunity to appreciate the generated works, which are also available on the project's website.

Result Analysis
For a better understanding, the discussion of the results of the pilot studies and the workshop is divided into the two subsections presented below.

PlayStop and JoyStick Results
At this stage, the hypotheses predetermined at the beginning of this section were verified, relating them to the answers obtained in the questionnaire applied to the study participants. This analysis consisted of in-loco observations by the researchers, documentation through audio and video recordings, and the application of a post-test user experience questionnaire to record participants' impressions. Two graphs, Figures 8 and 9, show the distribution of study participants' responses for each version. A more detailed analysis is given below. H2: Considering questions 8 to 13, which address hypothesis H2, the results indicate its partial confirmation according to the numbers obtained in the answers to question 11. In this question, 51.06% of the participants who used the JoyStick version stated that they have an affinity with electronic music, the style of the sounds used; likewise, 59.57% of the participants who used the PlayStop version agreed that they relate well to the style. In answering question 8, 46.81% of the participants who used the PlayStop version were undecided when asked whether the people in their circle of friends like the same musical styles; among JoyStick users this number was 40.43%. In answering question 10, about 80.85% of the participants who used the PlayStop version said they like different/alternative ways of making music, as did 74.47% of the JoyStick users.
The results presented are the sum of the answers "agree" and "strongly agree", as well as "disagree" and "strongly disagree".
This identification with the style of electronic music present in the App's sounds produced an unexpectedly positive result among participants in their responses. It does not fully confirm the hypothesis, since a larger number of users with no affinity with the electronic music used in this study was expected. Figures 12 and 13 show, as examples, some of the questions posed to the participants: question 7 asked whether they felt they were contributing to the composition as a whole; Figure 13 concerns whether it was easy to make music even without experience in musical composition; finally, another question asked whether it was possible to identify the chosen sound among all the others. The results pointing to the confirmation of the hypotheses, and the comments contained in the questionnaires, helped us learn a lot about how the proposed technology can be improved; the quantitative data from the two studies also allowed other inferences and showed that the technology could be improved in other respects. Therefore, for the purpose of comparison with the later versions, the quantitative data obtained in the two studies are presented below.
During the pilot studies with the PlayStop and JoyStick versions, the interactions, or sound interventions, were captured by the system. Each participant interacted several times according to the version of the app used; each interaction was reproduced on the speakers at the location and also recorded as part of the composition being made during the studies, giving rise to sound products of which the participants are the creators. In addition, it was possible to record when each participant interacted on a timeline, represented in Figure 14, in which the participants of groups 1 and 2 left their interactions registered. This timeline can be seen as a simplified version of the view found in music production software: when listening to the generated musical product while looking at the figure, we can see when each sound was played.
Figure 15: Timeline of groups 3, 4, and 5 for five minutes in the PlayStop study.
Table 1 shows the number of records made by the participants of groups 1 and 2. The most active participant in group 1 was the one with ID 30, recording 237 interactions in the environment; the participant who registered the fewest interactions was the one with ID 33, with only 6. The average number of interactions by participants in this group was 123.8.
In group 2, the most active participant, with ID 44, recorded about 1855 interactions in the composition environment. The least active participant was the one with ID 63, who registered only 20 interactions during his participation. The average for group 2 was around 522.1 interactions. In Figure 15, the timelines of groups 3, 4, and 5 are shown. As in the previous timelines, it is possible to observe the exact moment of each participant's interaction: some participants took a while to carry out any interaction, while others acted only in sporadic periods. In group 3, the most active participant, with ID 73, registered about 1767 interactions, while the least active, with ID 74, registered only 53 interactions in the environment.
In the study that used the Compomus JoyStick version, the graphics are different because this version explores the spatiality of the sound: the interventions in the environment were made through the joystick present in the developed application. The sound product in this case supports sound spatialization in 4 channels, so in addition to the moment of interaction on the composition timeline, in this version we have sound movement around the environment, bringing immersion to those who hear the work.
Figure 16: Image of groups 1 and 2 interactions recorded in the environment for five minutes in the JoyStick study.

Group 1                         Group 2
ID  Interactions                ID  Interactions
29  5633      35  2628          56  4045      60  1323      42  92
32  4818      34  2160          41  3679      40  1165      61  44
31  4363      33  1898          66  3354      54  965       22  10
30  4122      26  1666          62  3138      18  365
37  3705      27  1638          63  1927      65  337

Table 3: Number of groups 1 and 2 interactions recorded in the environment for five minutes in the JoyStick study.
Figure 16 shows the interactions made during the compositions of groups 1 and 2. In this figure, the participants are each represented with a different color, as shown in the image caption. Table 3 shows the concrete data that cannot be read accurately from the image. Unlike the previous study, the substantially greater number of interactions in the environment is notable here, and it is sometimes difficult to see details in the graphs. The ease of interaction with the joystick can be seen as a reason for this number of registered interactions. In group 1, the most active participant recorded 5633 interactions while the least active registered 1638; the average was 3263 interactions. This ease can also be seen as a disadvantage: although more interactions were recorded by users, the environment was not well explored. The second group followed the same pattern, with the most active user, ID 56, registering 4045 interactions and the least active, ID 22, only 10; the average was 1572.6 interactions.
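The per-group figures reported here (most active participant, least active participant, group average) come straight from the interaction counts in the logs. A sketch of that summary, using the group 1 counts from Table 3:

```python
def summarize_group(counts):
    """Given {participant_id: interaction_count} for one group, return
    the most active ID, the least active ID and the group average."""
    most = max(counts, key=counts.get)
    least = min(counts, key=counts.get)
    avg = round(sum(counts.values()) / len(counts), 1)
    return most, least, avg

# Group 1 of the JoyStick study, as in Table 3.
group1 = {29: 5633, 32: 4818, 31: 4363, 30: 4122, 37: 3705,
          35: 2628, 34: 2160, 33: 1898, 26: 1666, 27: 1638}
```

Running `summarize_group(group1)` reproduces the values discussed in the text: ID 29 most active, ID 27 least active, average 3263.1.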
Groups 3, 4, and 5 follow the same pattern. In group 3, the most active participant, with ID 74, registered 3954 interactions and the least active, with ID 18, only 317; the group average was 2884.6. In group 4, the most active participant, ID 80, made 4666 interactions and the least active, ID 84, registered 574; the group average was 2879.8. Finally, group 5 has as its most active participant the one with ID 95, with 7776 interactions recorded, and as its least active the one with ID 100, with 1418; the group average was 4139.6. Overall, the results achieved in terms of technology acceptance are positive. The hypotheses are considered confirmed when they reach 50% or more in the average of answers agreeing with the applied questions, Figures 10 and 11. Interaction with sound through mobile technologies is an artifact with the potential to be explored in the public art field. This good acceptance of the technology allowed us to move to a new version that would better explore sound spatialization and interaction with sounds. From this, we conducted studies with the Compomus NoGPS version, focused a little more on the experience and on the products generated from an interaction facilitated by technology for those who are not musicians but want to be part of a work generated by the audience itself.

NoGPS Results
The NoGPS version was designed to be more organic than the JoyStick version and more active than the PlayStop version regarding interaction with the environment. During the composition session, we thus led the participants to explore the sounds and their spatiality by walking around the environment. The compositions generated from this type of interaction can be considered unique and irreproducible, as the interaction is the factor that inserts much more complex randomness and unpredictability into the composition. This leads us to a collaborative chance musical composition that follows John Cage's precepts but with a methodology different from his. In this scenario, the agent of chance is the participants' own interaction with the sound through their smartphones ("instruments") in the environment, whether reproducing the sound or simply moving around and exploring its spatiality.
As a result, 4 collaborative chance musical compositions were generated in a quadraphonic format using the precepts of chance-based musical composition introduced by John Cage. The sound results can be checked on the website http://compomus.wixsite.com/compomus. First, we present the number of interactions recorded for each participant, as well as the number of sound changes each one made; then, the considerations and observations made by the participants during the workshop.
Figure 18: Image of groups 1 and 2 interactions recorded in the environment for five minutes in the NoGPS study.
Figure 18 shows the movements registered by the participants through the environment. In this first round, groups 1 and 2 were the first to use the environment, with five participants in each group. The numbers and their colors in Figure 18 each represent a single user, identified only by the ID code assigned by the app. The figure was generated from the recovered logs and has a Cartesian plane with X and Y axes ranging from -10 to 10.

Group 1                Group 2
ID   Interactions      ID   Interactions
163  14976             175  4591
155  6629              160  3787
162  6468              174  2642
124  1996              172  1914
164  354               179  48

Table 5: Number of groups 1 and 2 interactions recorded in the environment for five minutes in the NoGPS study.
Figure 18 can visually illustrate the trail left by the participants; however, only Table 5 shows the exact number of interactions performed. In this table, the number of interactions performed by each participant is presented by group, in decreasing order. In the first group, the participant who most explored the environment was ID 163, who registered 14976 interactions during his participation in the workshop; the participant with the fewest recorded interactions was ID 164, with 354. In group 2, the most active participant, ID 175, registered about 4591 interactions during the period in which he used the application; the participant who registered the fewest interactions was the user with ID 179, with 48.
Finally, to complement the results collected during this workshop, the following figures show the interactions of each participant, considering both their movement through the environment and the sounds they chose. To assist the visualization of the data, given the number of items in the legends of some figures, Tables 6 and 8 were prepared with measurable data. In these graphs, the axes vary from -10 to 10, and the numbers and colored dots represent each sound chosen by the participant during use. For more detail, Table 6 lists the participants in order, from the one who explored the most sounds, that is, who experienced the greatest number of different sounds, to the one who changed sounds the least during their participation in the workshop.
In group 1, the participant who most explored the available sounds was ID 164, using 15 different sounds during his participation in the workshop; the participant who changed sounds the least was ID 124, using 4 different sounds. In group 2, the participant who most explored the available sounds was ID 175, using about 34 different sounds, while the participant who least explored them was ID 179, who made use of 15 different sounds during his participation. Groups 3 and 4 then completed this workshop: group 3 had five participants while group 4 had four. Each group, following the pattern of the other rounds, used the environment for approximately five minutes. As a result, these groups generated two musical compositions in the indeterminate style and in quadraphonic format. Figure 21 shows the movements registered by the participants of the two groups, group 3 on the left and group 4 on the right. Once again, the numbers represent the IDs assigned to the participants by the app and, for better visualization, each participant has an assigned color. The interactions are distributed on a Cartesian plane, like those of the previous groups, with X and Y axes ranging from -10 to 10. Table 7 shows in numbers, in decreasing order, the interactions registered by groups 3 and 4 during the period in which they used the environment.
On the left side, the participant who explored the environment the most, with ID 184, recorded a total of 5270 interactions, while the participant who registered the fewest was ID 180, with 1079.

Group 3                Group 4
ID   Interactions      ID   Interactions
184  5270              185  6648
181  2150              186  3314
182  1707              187  486
183  1368              188  97
180  1079

Table 7: Number of groups 3 and 4 interactions recorded in the environment for five minutes in the NoGPS study.
Table 8 complements Figures 22 and 23, making it possible to observe the participants who changed sounds the most and the least during the workshop. In group 3, the participant who experienced the most different sounds was ID 182, changing sounds 19 times, while the one who changed sounds least often was ID 181, with 10 changes. In group 4, the participant who most explored the available sounds was ID 188, who made exactly 22 sound changes during his participation; in contrast, the participant who least explored the available sounds was ID 187, with 11 recorded sound changes. The oval shape present in most of the figures is due to a characteristic of the implemented solution, and depending on the smartphone used, the registered behavior may differ from the others. In this study, one smartphone also presented problems with its gyroscope, causing movements to be registered on only one of the axes. At least three smartphones of the same make and model, coincidentally all in group 3, showed anomalous behavior when using the app: the wireless network used did not have an internet connection, and it was noticed that these phones kept searching for a new network with Internet access.
Per Musi no. 40, Computing Performed Music: 1-35. e204024. DOI 10.35699/2317-6377.2020
Figure 23: Image of Group 4 participants' interactions considering their chosen sounds in the NoGPS study.
During the analysis of the results, another behavior was also noticed: during the users' movement, the location information forms a cross. This behavior, present in almost all participants and observable graphically, was due to their desire to check whether the sound was actually being reproduced at that spot.
After each round of the workshop, participants were invited to an informal conversation to express what they thought of the experience they had had a few minutes earlier. In all the conversations, there was always someone who could not understand the purpose of the experiment and was in doubt about its application. This kind of doubt was observed among students from the technology area and can be considered natural given their course of origin: their background in the exact sciences makes these students tend to focus on evaluating the technology within the artistic experience. We observed that students from areas farther from technology tend to see a possible artistic goal of the workshop more readily.
During the conversation, they were asked about the experience. Volunteer P02 said: "I thought about the app and the experience is easy to relate to both. In relation to the app, it is easy because you just put the sound on and walk around." Participant P12 said: "The experience was easy, but some phones had problems with the app. So, if it was just the experience, I would have given top marks but I associated it with the app and got a little bit out of it. I analyzed both the environment and the app." Participant P04 added: "I thought, like, it was quick to learn how to use the app. That I understood, I kept thinking about it. More of an app than experience." They were then asked whether what they were building there was predictable and common and whether they had an idea of the result. Participant P23 replied: "I thought it was unpredictable because I didn't know what was going on. But it was unpredictable about the experience, but it wasn't a bad unpredictable it was a good unpredictable." Volunteer P03 also replied: "From the point of view of my experience, I found it unpredictable good because I didn't know what was happening, and I saw it happening at the time and it was unpredictable because I wasn't expecting it, and it was a good experience." With these results we can conclude that the participants managed to be more active and have a more organic experience; the numbers show considerable activity if we take into account that the participants needed to move around the environment. It is thus possible to notice that the participants explored the environment and the sounds well, despite the technical problems experienced by some volunteers.
In general, the workshop produced good results. The sound compositions generated are unique, unrepeatable works: even if we wanted to reproduce the same movements, it would not be possible, given the complexity of the composition. The sound resulting from these interactions can be heard on the project website.

Discussion
When we imagine how technology can be merged with art in a participatory performance in the musical context, we find the work of John Cage, who introduced the technique of chance composition. In this technique, the insertion of compositional elements is determined by processes such as rolling dice, mathematical formulas, algorithms, and so on. This technique was chosen because it fits the concept developed in this study: a public art installation in which passersby use mobile technologies to interact with the environment and create something collaboratively, even without experience in music production or composition.
In this study, we carried out two pilot studies using two versions of Compomus designed with different approaches, with the aim of verifying which one would fare better in the users' evaluation. Participants used both versions. The first version, PlayStop, detects when the user is present in the environment and plays the chosen sound in a loop. In the JoyStick version, besides controlling sound playback, the user can also control the spatialization of the sound, directing it anywhere in a 2D plane.
The PlayStop version, being simpler, produced less engagement with the environment. It is the version closest to the basic concept of chance composition: the sound or effect plays when the participant enters the environment and stops when the participant leaves it. The JoyStick version introduced a new element, sound spatialization, which increased the complexity of the process.
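The paper does not publish the panning algorithm behind the JoyStick version's directional control, so the sketch below is only an illustrative assumption: a conventional equal-power panner that maps a 2D joystick direction to gains for four speakers at the corners of the space. The names `quad_gains` and `SPEAKERS`, the speaker layout, and the cosine gain law are all hypothetical, not taken from Compomus.

```python
import math

# Assumed quadraphonic layout: four speakers at the corners of a unit square.
SPEAKERS = {
    "front_left":  (-1.0,  1.0),
    "front_right": ( 1.0,  1.0),
    "rear_left":   (-1.0, -1.0),
    "rear_right":  ( 1.0, -1.0),
}

def quad_gains(angle_deg):
    """Map a joystick direction (degrees, 0 = straight ahead) to four
    speaker gains, cosine-weighted and normalized to constant power."""
    rad = math.radians(angle_deg)
    dx, dy = math.sin(rad), math.cos(rad)  # unit vector, 0 deg = +y (front)
    raw = {}
    for name, (sx, sy) in SPEAKERS.items():
        # cosine of the angle between the direction and the speaker,
        # clipped at zero so speakers behind the direction stay silent
        dot = (dx * sx + dy * sy) / math.hypot(sx, sy)
        raw[name] = max(dot, 0.0)
    power = math.sqrt(sum(g * g for g in raw.values())) or 1.0
    return {name: g / power for name, g in raw.items()}
```

Pointing the joystick straight ahead (0°) drives only the two front speakers at equal gain; sweeping the angle cross-fades the sound smoothly around the room, which matches the behavior the JoyStick version describes.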
By introducing sound spatialization, the JoyStick version of Compomus opened up possibilities for use in different contexts; however, we observed that participants moved around the environment less. For most of the study, participants stood still, only steering the direction of the sound. The idea of immersion proved interesting, as did the increased complexity of the composition: with 3D sound, the resulting recording gives the impression of sounds crossing from side to side and from front to back. Noting that the PlayStop version produced a lot of movement but little exploration, while the JoyStick version produced little movement but extensive exploration of the available sounds and their spatialization, we had the idea of merging the two technologies into a new version for study.
This new version was intended to provide a more organic interaction, shifting the participants' focus away from the application and toward the environment. The joystick was set aside, and the participants' movement through the environment now drove the movement of the sound through the speakers. A participatory workshop was held to observe interaction with the environment and the behavior of volunteers using this new version of Compomus, called NoGPS. The results showed strong engagement from participants in using and exploring the environment and sounds, which had been lacking in the previous versions, although some problems were still reported by participants.
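The NoGPS gain law is likewise not specified in the paper; one simple way to let a participant's position, rather than a joystick, steer the sound is inverse-distance weighting across the speakers. The sketch below is an assumption for illustration: the room size, speaker positions, `rolloff` parameter, and the name `position_gains` are all hypothetical.

```python
import math

ROOM = 4.0  # assumed square room side length, in meters
# Assumed speakers at the four corners of the room
SPEAKERS = [(0.0, 0.0), (ROOM, 0.0), (0.0, ROOM), (ROOM, ROOM)]

def position_gains(x, y, rolloff=1.0):
    """Weight each speaker by the inverse of its distance to the
    participant's estimated (x, y) position, normalized to sum to 1,
    so the sound appears to follow the participant around the room."""
    weights = []
    for sx, sy in SPEAKERS:
        d = math.hypot(x - sx, y - sy)
        weights.append(1.0 / (d ** rolloff + 1e-6))  # guard against d == 0
    total = sum(weights)
    return [w / total for w in weights]
```

Standing at the center yields equal gains on all four speakers; walking toward a corner progressively concentrates the participant's sound in the nearest speaker, which is the kind of movement-driven spatialization the NoGPS version describes.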
When a participant chooses a sound and enters the environment, the sound is played and modifies the environment, so the participant is interacting with the environment. When talking with other participants, and perhaps coordinating something, they interact with each other; and by moving around the environment, making their sound move across the speakers, they also interact with the sound itself. From the point of view of collaboration, these interactions generate something out of the participants' spontaneous collaboration, and this product is a piece of music that can be considered aleatoric.
From a more theoretical point of view, two concepts are applied in this work: controlled chance and pure chance. Controlled chance, introduced by Pierre Boulez, seeks to maintain a logical continuity, as if following a kind of guide Trenkamp (1976). Notational elements show the interpreter which paths may be taken; Stone (1980) calls this technique Hardboxes, where each box delimits the musical material that can be chosen. Boulez uses this technique in Constellation-miroir, the third movement of the Third Sonata for Piano, as pointed out by Trenkamp (1976) and Simms (1996). Costa (2009) states that this delimitation defines a previously drawn "map" or, as John Cage calls it, a "structure", which allows the interpreter to choose which of the available paths will be followed, thus changing the form of the work. These paths are, according to Cage, intrinsic components of the work developed within the structure: the paths are subject to choice, but the results are bound to the predictable. In other words, in this technique the composer provides a flow, a structure, or paths that the interpreter may follow when performing the work.
In our case, the proposed environment uses the Compomus application, which offers a finite number of available sounds. From this point of view, part of the compositional process is undeniably controlled by the definition of this set of samples from which the participants can choose: we cannot, of course, provide every existing sound, as a purely chance-based methodology would require. And even though the number of possibilities is mathematically large, the result could still be predictable.
To prevent this from happening easily, we take as an example John Cage's works from the 1950s, in which the use of indeterminacy leaves choices to the interpreter Terra (1999). Bayer, as cited by Terra (1999), argues that the use of chance does not presuppose a rhetorical order, but rather an abdication of order, so that a real intervention of chance can occur. Our intention is to move toward the methodology Cage adopted in the late 1940s, in which the composer ceases to be concerned with the shape of a piece and no longer clings to a structure Costa (2009).
By introducing sound spatiality and the real-time interaction of participants as both composers and interpreters, the complexity of the compositional process grows enormously compared to the simplest methods, such as rolling dice, and it is here that we believe a real intervention of chance occurs. A die has only six faces, so there is a high probability that one of the faces repeats after just a few rolls; even an algorithm follows a pattern, and however much its complexity increases, over a large enough number of draws it still tends to fall into a pattern. When we use the organic element of human beings, however, the pattern becomes immeasurable: a participant, even willingly, can never interact with the environment in exactly the same way, enter it at exactly the same time, and perform the same movements with precision. It would be very difficult to predict movements within the plane on a timeline with an indefinite number of participants.
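The claim that a die repeats quickly can be made concrete with a birthday-problem calculation (our illustration, not from the paper; the function name `p_repeat` is hypothetical): the probability that at least one face repeats already exceeds 70% after only four rolls of a six-sided die.

```python
from math import perm

def p_repeat(n_rolls, faces=6):
    """Probability that n rolls of a fair die show at least one
    repeated face. Once n exceeds the number of faces, a repeat
    is certain by the pigeonhole principle."""
    if n_rolls > faces:
        return 1.0
    # perm(faces, n) / faces**n = probability that all n rolls differ
    all_distinct = perm(faces, n_rolls) / faces ** n_rolls
    return 1.0 - all_distinct
```

For example, `p_repeat(2)` is 1/6 and `p_repeat(4)` is about 0.72, whereas a continuous trajectory of participants moving through a room has no comparable finite outcome space, which is the contrast the paragraph draws.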

Concluding Remarks
In this study, we present Compomus, a collaborative musical composition environment for immersive interaction with spatial sound designed for public spaces.
To understand the contribution of this work, we must set aside the concepts of traditional musical composition mentioned above. John Cage used chance processes such as algorithms, dice, or coin tosses as agents of chance in his compositions. Compomus is based on this method, so it has no commitment to synchrony or harmony, using technology and interaction as its agent of chance.
The motivation for using the composition technique introduced by Cage lies precisely in this lack of commitment to traditional composition techniques: it allows us to hand the "control" of the composition to the users in an interactive environment, in which composition and playback happen live so that they can experiment and create their own work. By introducing sound spatiality and interaction with the environment as agents of chance, we increase the complexity of the aleatoric method of composition.
Thus, one contribution of this study is an evolution of a method of musical composition in which the complexity of the randomness of the compositional process increases. Another contribution is the construction of a decentralized, interactive, and collaborative music composition environment that meets the concepts of public art: there is no control over the interactions, and hence no single composer-artist, but rather several contributing composers.
Although the samples used in this study belong to the electronic genre, the environment is not limited to building electronic music; on the contrary, doing so would require implementing features to ensure the synchrony of the samples, and imposing meter and harmony would go against the primary idea of composition based on Cage's methodology. The types of samples can vary according to the context in which the environment is used; as mentioned, in previous studies the sounds used were birdsong samples from the Amazon region. The samples used in this work were chosen because they are loops that are freely and easily available, but future uses could draw on instruments, car sounds, cats, or city sounds: the possibilities are open.
Several technologies were considered for implementing the concept; we brainstormed technologies that could assist the process and were feasible to implement, and arrived at the prototypes observed in this study. Pilot case studies were carried out to verify the acceptance of the technology and users' first impressions, which were very positive according to the responses and feedback given by the study participants. Then, based on ideas gathered throughout the two pilot studies, the two versions were merged, creating an immersive environment in which the participant controls the sound with movement. After this study, we verified that the environment was explored more by the participants, arousing their curiosity about what they could do with the technology, as can be seen in the generated graphs.
The technology still needs to evolve in terms of usability and user experience, but these are objectives for future work, whose focus is precisely to make these improvements to the environment and to bring the application to the iOS platform. As previously mentioned, this study is part of research that aims to observe audience interaction toward a possible interaction model for participatory performances, and to create an interactive environment that makes a participatory performance possible by enabling all users of the environment to be audience and artists at the same time and to create something together.
As a result, fourteen aleatoric music compositions were generated. Five compositions were obtained with the PlayStop version and are available in stereo format; another nine were obtained with the JoyStick version and, later, the NoGPS version, in a quadraphonic system. We made them available on the website Compomus (2019) so that the participants and evaluators of this study could verify the results of the interactions for the three versions of the proposed Compomus application.