Looking for a place for Sonification: Sons de Silício and the Buzu installation

This paper discusses the concept of sonification as applied to the Sons de Silício exhibition and, more specifically, to the design of Buzu, an audiovisual installation that generates an auditory image of the São Paulo bus transportation system. Buzu makes perceptible information about both the system's planning and its behavior during a particular week in October 2017. The work is an artistic outcome of the InterSCity project, an inter-institutional research initiative concerning the Future Internet and Smart Cities. Along with the discussion of the Buzu creative process, we examine mining and processing strategies related to the sonification of big data, data-to-sound mapping methods, the auditory structure for displaying the material, and the public exhibition of the work in the context of an artistic event.


Sonification and sound arts
Sonification is a neologism that indicates the transduction of data, usually numerical, into sound events. The emergence of the term in the 1990s coincides with what has been called the sonic turn both in academic research and in artistic production. Jim Drobnick points out that this attention to sound appears simultaneously as "a site for analysis, a medium for aesthetic engagement, and a model for theorization." (Drobnick 2004, p 10). Since then, the use of sound as a means of producing knowledge in science, engineering and medicine has become recurrent (Bijsterveld 2019), despite the prominence that verbal and visual forms of representation still exert in these fields. In the territory of artistic creation, the growing production in sound art attests to the interest in the aesthetic potential of aural practices.
In the field of art, sonification generally implies an association between science and technology within an interdisciplinary approach. Born and Barry (2010) proposed the term art-science to describe this interdisciplinary association, which can be understood through three logics: accountability, innovation, and ontology. This triadic framework suggests that sonification implies a network of legitimacy exchange between music and science (Supper 2014), in which each field benefits from the activity of the other. Elsewhere, we have summarized these mutual contributions, discussing the relevance of sonification in musical and scientific practices (Iazzetta and Piccinini 2018; Arango 2019).
In this article we will approach the intersections between art and science through the perspective of sonification. Our main focus is the discussion of Buzu, an installation work based on a process of sonification of the urban bus network in the city of São Paulo. The work led to several discussions on technical, aesthetic and conceptual issues related to the approximation between art and science. It was also the starting point for the exhibition "Sons de Silício" that brought together more than 20 works of sound art, in addition to performances and workshops.
These two endeavours were carried out by NuSom, the Research Centre on Sonology at the University of São Paulo. The projects were developed in collaboration with InterSCity, a collaborative research project sponsored by the National Science and Technology Institute (INCT) that explores the field of Smart Cities with research in areas such as High-Performance Distributed Computing and Future Internet (Macêdo Batista et al. 2016). Both Buzu and Sons de Silício provided a link between sonification and sound art, connecting artists and scholars interested in acoustic forms of expression.
It is worth mentioning that sonification has been increasingly implemented in projects that share the challenge of creating acoustical representations of the city. These projects include the display of London subway system information with reference sounds (Nickerson et al. 2017), the exploration of mapping techniques used in urban planning and design (Adhitya and Kuuskankare 2011), and the organization of geospatial data in geographic maps (Brittell 2018). An inspirational project is Metrosynth (Gover 2018), a sonification system of the Montreal subway running in HTML5.

Sons de Silício
The Sons de Silício exhibition arose from the shared interest of some of NuSom's members in interactive practices and the development of technologies for artistic creation. These researchers are gathered in GPI (Grupo de Práticas Interativas, Group of Interactive Practices), one of NuSom's groups. GPI's actions seek to explore digital prototyping tools for interactive sound devices. The group's work involves carrying out practical projects to create mobile and autonomous devices, based on concepts such as the Internet of Things, computational ubiquity and sonification.
During 2018, GPI sought strategies to approach other groups with similar interests in order to share knowledge about sound generation, processing and interaction. In the process, we found other laboratories at the University researching sound from perspectives different from ours. Subsequently, we contacted other artists' studios and experimental initiatives. Through this process, we became aware of a very diverse and active community interested in ways of producing artistic research with sound. This led us to launch a call for Sons de Silício in October 2018, inviting the community of sound artists to present sound art proposals, experimental musical performances and workshops. The exhibition was organized in the belief that scientific research should reach a broader sector of the population through a wider understanding of its results. Accordingly, its main purpose was to bring the knowledge produced in the University into the public sphere. The exhibition embraced the artistic results of research projects, particularly those conducted in partnership between scientists and artists.
The InterSCity project provided the materials for the development of some artistic works, as well as for the exhibition's assembly and documentation. The exhibition was the result of post-doctoral research carried out by Julian Jaramillo with the support of the University of São Paulo's Music Department and NuSom. It was conceived as the 15th and 16th editions of the ¿Música? series, hosted by NuSom since 2006 and currently featuring musical and sound art works that explore experimentalism, the critical use of technologies, the integration of visual, gestural and sonic elements, the adoption of improvisation techniques, and the exploration of performance spaces as sites of discussion and reflection.
Sons de Silício was presented in two editions in 2019. The first was held between April 1st and 26th, 2019, at Espaço das Artes. This large venue had just been opened in the building that had housed USP's Museum of Contemporary Art for more than three decades. The program notes for the exhibition included the following text: "Sons de Silício explores the topic of Experimental Lutherie as a unifying concept for music, visual arts and computer science practices, as well as a catalyst for innovative modes of experimentation with sound technologies. During April, the Espaço das Artes was the meeting place to reflect on sound from a perspective enriched by concepts from different fields of knowledge. The exhibition presents works reinterpreting the idea of the musical instrument by turning it into machine, device, arrangement, sculpture, interactive system, resonant structure, data explorer, body extension, channel and experiment" (Arango 2019). In addition to the 24 sound art pieces exhibited in the temporary gallery, 8 activation events were held with performances and workshops. These events encouraged visitors to exchange experiences and information about sound and sound experimentation, presenting diverse strategies for intersecting art and science.
Due to the repercussions of the first exhibition, the NuSom group was invited to carry out a second edition, which took place between September 9th and December 12th, 2019. During these three months, the space was activated on 7 occasions, serving to exhibit 17 works assembled on the second floor of the Centro Universitário Maria Antônia, a cultural facility that also belongs to USP, located in downtown São Paulo.
The new exhibition space brought several challenges to the organizing team. Among them, it is worth mentioning the limited physical space, which led the curatorship to negotiate with some artists new versions of their works in accordance with the available space. Of these, it is worth mentioning Transduções, by Alessandra Bochio and Felipe Merker, a further development of the work Deslocamentos shown in the first edition. It is also important to mention the historical relevance of the Maria Antônia space, which resonated with some of the works. The work Rádio Libertadora is inspired by a direct action that took place in 1969 as a protest against the military dictatorship in Brazil: the national radio was invaded and its transmission space hijacked for the broadcast of a revolutionary manifesto written by Carlos Marighella. The installation used a short-range FM transmitter to broadcast a remix of the original audio with electronic beats. The installation Reverberações do Silenciamento was a collective creation exploring a physical and visual representation of the movement generated by sound waves. A steel spring is suspended and attached to a loudspeaker that reproduces audio at frequencies below the hearing threshold. The audio is a recording of a voice reciting the names of people killed or disappeared during the Brazilian military dictatorship. This audio was transposed to the sub-bass range and moves the suspended spring, resulting in a movement that gives form to a violent silencing. Both works go beyond the idea of experimentation in lutherie and sonification techniques towards a free poetic association of sound art with the collective traumas derived from the military regime.
Seven exhibition activation events were held, with workshops and performances exploring digital lutherie techniques, sonic instruments and the creation of sound art. In total, more than thirty artists were involved in an extensive program. Located in a culturally rich area, the exhibition became a great celebration of sound invention, stimulating several artists to create original works for that space, as was the case with the SPIO orchestra, which created a collective, inspired and multilingual work set in the exhibition space.
While the central focus of the first edition was digital lutherie, in the second edition this focus was diluted among more diverse works, allowing a more expanded view of the concept of a musical instrument. During the guided tours, the most frequently heard question was "but is this a musical instrument?" This navigation along the limits of the instrument concept created an environment for reflection on different supports and fed various views on the relationship between technology and sound creation. During the three months of the exhibition, many works had technical problems and needed repairs and maintenance. In some cases, the presence of artists maintaining their installations in the exhibition space ended up creating temporary experimentation laboratories and brought more movement to the life cycle of the exhibition.

The Buzu installation
Buzu is an audiovisual installation launched at the "Sons de Silício" art exhibition as an artistic outcome of the InterSCity project. The installation proposes an acoustical image of São Paulo by retrieving information from the city's public bus transportation system, which includes 2,183 lines. Buzu makes perceptible a dataset created by the InterSCity project, which reports the system's behavior during a particular working week in October 2017. While the purpose of the original dataset was to compare the system's behavior with that of the Easter holiday, in Buzu the dataset feeds an audiovisual engine created in Pure Data and Processing. The audio is projected by a quadraphonic speaker system and the visuals are displayed on a central screen. Text-based information is presented on four small LCD screens distributed around the central structure. By exploring alternative representations of the city, the project adopts sonification as its main strategy.

Figure 5: Buzu (2018) at the opening event of the "Sons de Silício" exhibition.

A screen placed at the center of the quadraphonic space shows the spatial displacement of the bus lines on a map representing the metropolitan region. The map is rendered with the Processing language, connected via OSC to the buzuDados server running in Pd.
Around the central screen there are 4 small LCD displays, each connected to a Wemos D1 mini Wi-Fi microcontroller. The displays show a text label corresponding to the ID number of each bus line highlighted on the map. The main purpose of these visual cues is to provide a reference capable of triggering the recognition of acoustic parameters, and thus the emergence of a sounding image of the city.

Parsing the data set
Buzu distributes audio synthesis, image generation and data analysis tasks to four Raspberry Pi 3 Model B+ units over a Local Area Network (LAN). One RPi plays the role of a server, on which the Transport runs, the NuSom dataset is parsed, and the data for sound and image synthesis is generated. The data is sent over the Wi-Fi LAN using UDP-OSC, implemented with the Pd netsend object. The data is sent through the router's broadcast address to the other 3 RPis and the 4 Wemos D1 mini Wi-Fi microcontrollers; there are, in sum, seven clients.
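As a rough illustration of this transport layer, the following Python sketch hand-encodes a minimal OSC message and sends it to a LAN broadcast address over UDP. In the installation itself this is done inside Pd by netsend; the address pattern /buzu, the port number and the broadcast IP below are invented placeholders.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC requires."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (int, float and string args only)."""
    packet = osc_pad(address.encode())
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            tags += "s"
            payload += osc_pad(str(a).encode())
    return packet + osc_pad(tags.encode()) + payload

def broadcast(message: bytes, port: int = 3000,
              address: str = "192.168.0.255") -> None:
    """Send one OSC packet to every client on the LAN via UDP broadcast."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(message, (address, port))
    sock.close()
```

One UDP packet per message reaches all seven clients at once, which is why a single broadcast address suffices here.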
The Audio 1-2 and Audio 3-4 RPi units are responsible for quadraphonic playback, with Audio 1-2 handling the front stereo pair and Audio 3-4 the rear stereo pair. The Video RPi is responsible for the real-time screen rendering of the bus stop data used in the sonification, and the Wemos D1 mini units are responsible for receiving and displaying the ID of each retrieved bus line.

Analysing the original dataset
The original dataset and its documentation are available on the InterSCity website (Kon 2019) and are well documented (Wen 2018). Called "Bus movement model", it consists of a 146 MB file with 8 pairs of XML files intended to be incorporated into the InterSC-Simulator (Santana et al. 2017). It represents trips performed by 2,183 bus lines in São Paulo. The information feeding the dataset came from two sources: GTFS (General Transit Feed Specification) and AVL (Automatic Vehicle Location). The former reflects the service planning, providing data such as bus line code, route, bus stop locations and the pathway between stops. This open data can be consulted and implemented from the SPTrans public site (Trans 2020). The latter reflects the system's real behavior, supplying data such as departure time, departure frequency and average speed. These data are gathered by GPS devices mounted on each vehicle and were provided by Scipopulis (Pons 2018), a startup dedicated to processing São Paulo transportation data from the Olho Vivo system (Trans 2018).

Creating the NuSom dataset
We created three Python scripts with the Pandas library that extract information from the original buses.xml and maps.xml files, as well as from other intermediate files. The scripts transform the gathered data into a new set of files called "NuSom". This dataset can be read by the Pd text object.
The first Python script, cria_onibus_dia_pdvanilla.py, generates one file per day: it interprets the 8 original buses.xml files and creates 8 new files displaying GTFS data such as departure interval, start time and bus stop locations. The second script, cria_trajetos_dia_pdvanilla.py, also generates one file per day: it retrieves data from the map.xml files and creates 8 files. Most of the identifiers in these files correspond to bus line trajectories, which consist of the first and last bus stop identifiers separated by a dash. The third script, cria_coordenadas_dia_pdvanilla.py, works in a different way. It generates 8 files called "coordenadas" by retrieving information from files created by us in an earlier phase of the project. These files, called map_id_x_y-latlong, were obtained manually from map.xml: we converted the original nodes, coded in UTM (Universal Transverse Mercator), to Lat and Long coordinates using an online converter and a flexible text editor.
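The core transformation these scripts perform can be sketched as below. The tag and attribute names (line, stop, id, start) and the record layout are invented for illustration; the real buses.xml schema used by the InterSC-Simulator differs. The only fixed constraint is the output: semicolon-terminated lines, the message format that Pd's [text] object reads from a file.

```python
import xml.etree.ElementTree as ET

def extract_lines(xml_text: str):
    """Collect one record per bus line: (line id, start time, stop ids).
    The tag/attribute names here are illustrative, not the real schema."""
    root = ET.fromstring(xml_text)
    records = []
    for line in root.iter("line"):
        stops = [s.get("id") for s in line.iter("stop")]
        records.append((line.get("id"), line.get("start"), stops))
    return records

def to_pd_text(records) -> str:
    """Render records as semicolon-terminated messages, so the file can
    be loaded by Pd's [text define] / [text] objects."""
    return "\n".join(
        f"{line_id} {start} {' '.join(stops)};"
        for line_id, start, stops in records
    )
```

Running `to_pd_text(extract_lines(...))` over each of the 8 daily files would yield the per-day text files described above.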

The buzuDados specs
To navigate our dataset, we created the Pd buzuDados abstraction, which requests and retrieves information in real time. When it receives messages at its left inlet, data is returned through its left outlet. BuzuDados starts working when a number ranging from 0 to 7 is sent to its right inlet, corresponding to a weekday or the Easter holiday. The right outlet emits a bang after data is retrieved.
BuzuDados accepts different types of messages. When sent a number from 0 to 2,183, it retrieves the string <bus id>, the unique bus line identifier according to the SPTrans system. When sent <bus id> followed by <start_time>, buzuDados returns a string symbol corresponding to the time at which the bus line should start operating; <bus id> followed by the string <interval> returns a list of 24 numbers, each corresponding to the departure frequency (in seconds) for each of the 24 hours of the day; <bus id> followed by the string <stops> returns a list of strings corresponding to the identifiers of each bus line stop. When sent the message <coordinates> and a bus stop identifier, buzuDados retrieves a list of 2 numbers, <x> and <y>, corresponding to the scaled Lat and Long coordinates of that bus stop. When sent <zona> followed by a bus stop identifier, it retrieves a number from 0 to 4 corresponding to the São Paulo zones: Downtown, Western, Eastern, Northern or Southern. Lastly, when sent a message with the string <avgspeed> followed by the first and last bus stop identifiers of a line separated by a dash, buzuDados retrieves a list of 24 numbers corresponding to the average speed (in m/s) at each of the 24 hours of the day. The NuSom dataset and the buzuDados abstraction are open and can be downloaded from our GitHub repository (Viveros 2019).
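To make the query protocol concrete, here is a toy Python model of the same lookups. The abstraction itself is written in Pd, and the sample records in the usage below are invented, not taken from the NuSom dataset; only the shape of the queries and replies follows the description above.

```python
class BuzuDados:
    """Pure-Python sketch of the buzuDados query protocol."""

    def __init__(self, lines, coordinates, zones, speeds):
        self.lines = lines              # bus id -> {start_time, interval, stops}
        self.coordinates = coordinates  # stop id -> (x, y), scaled Lat/Long
        self.zones = zones              # stop id -> 0..4 (São Paulo zone)
        self.speeds = speeds            # "first-last" -> 24 hourly speeds (m/s)

    def start_time(self, bus_id):
        return self.lines[bus_id]["start_time"]

    def interval(self, bus_id):
        # 24 departure frequencies in seconds, one per hour of the day
        return self.lines[bus_id]["interval"]

    def stops(self, bus_id):
        return self.lines[bus_id]["stops"]

    def coordinate(self, stop_id):
        return self.coordinates[stop_id]

    def zona(self, stop_id):
        return self.zones[stop_id]

    def avgspeed(self, first_stop, last_stop):
        # keyed by the dash-separated trajectory identifier
        return self.speeds[f"{first_stop}-{last_stop}"]
```

A query session with invented data might look like `db.avgspeed("s1", "s2")`, returning the 24 hourly averages for that trajectory.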

Data-sound mapping
The Buzu audio engine is composed of two synthesizers working simultaneously: the drone and the melody maker machine. The former operates as an acoustic background, a lower-spectrum pad sound giving a sensation of continuity; it retrieves AVL data. The latter sonifies the routes of up to four randomly chosen bus lines by tracking the path followed at each bus stop; it produces identifiable melodies and retrieves mainly GTFS data. The drone and the melody maker machine receive data from the buzuDados abstraction and are driven by the Transport device.

Figure 7: The help file of the buzuDados Pd abstraction.

Time
Our strategy to manage the passage of time in Buzu was to use a Transport device of the kind widely found in DAWs (Digital Audio Workstations) and audio recording and editing software, with play, pause and stop buttons. Once activated, a counter advances in intervals of 100 milliseconds, showing day, hour, minute, second and millisecond subdivisions. Day and speed changes can be made, which facilitates navigating the dataset over time and enables the user to select the exact moment at which the dataset is queried.
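The transport logic described above can be sketched as follows. This is a minimal model assuming only what the text states: a counter advanced in 100 ms ticks, scaled by a speed factor, decomposed into day/hour/minute/second/millisecond subdivisions; the actual device is implemented in Pd.

```python
class Transport:
    """Minimal sketch of the Buzu transport clock."""

    def __init__(self, speed: float = 1.0):
        self.ms = 0             # elapsed dataset time in milliseconds
        self.speed = speed      # playback speed factor
        self.running = False

    def play(self):
        self.running = True

    def pause(self):
        self.running = False

    def stop(self):
        self.running = False
        self.ms = 0

    def tick(self):
        """Called every 100 ms of real time; advances dataset time."""
        if self.running:
            self.ms += int(100 * self.speed)

    def clock(self):
        """Decompose elapsed time into (day, hour, minute, second, ms)."""
        ms = self.ms
        day, ms = divmod(ms, 86_400_000)
        hour, ms = divmod(ms, 3_600_000)
        minute, ms = divmod(ms, 60_000)
        second, ms = divmod(ms, 1_000)
        return day, hour, minute, second, ms
```

Queries to buzuDados would then be issued against the hour and minute returned by `clock()`, so the dataset is always read at the transport's current position.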

The drone synthesizer
The drone synthesizer makes audible the <distance> and <avgspeed> data referring to the bus line pathways contained in the NuSom dataset. Three overlapping textures were used to generate a contemplative sound structure. The first is a sort of low-frequency drone that establishes a synthetic and timeless acoustic sensation. We also employ the lfnoise generator and some filters from the Pd Else library (Porres 2017) to imprint a subtle and deep texture. Since the audio system was quadraphonic, we chose a low-frequency noise generator with a fixed seed as a static sound source between the two pairs of stereo speakers. This layer is independent and autonomous from the dataset. The second texture is inspired by the Risset cascade of arpeggios (Risset 2003). It implements additive synthesis to generate states of restlessness and relaxation by manipulating the internal beat rate of the sound spectrum according to the average speed of the bus line at a given time of day. Lastly, there is a noise texture whose morphology depends on the average speed of each bus line; it is heard when invalid data is retrieved. In the InterSCity dataset, an <avgspeed> value of -1 indicates that a data sampling error occurred (a failure to transmit, receive, or even trigger the GPS), and the NuSom dataset preserves this value. Thus, when the average speed along the path is -1, the noisy texture is activated, generating a grainy random texture reminiscent of analog noise and directly signaling a failure in the original dataset.
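The routing between valid and invalid speed data can be summarized in a few lines. The -1 sentinel and the speed-to-beat-rate relationship come from the description above; the numeric scaling is invented for illustration, since the actual mapping lives inside the Pd patch.

```python
def drone_control(avgspeed: float) -> dict:
    """Route one hour's average speed (m/s) to a drone layer.
    -1 flags an AVL sampling error and triggers the noise texture;
    valid speeds drive the internal beat rate of the Risset-style
    arpeggio layer (the 0.5 + 0.1*speed scaling is illustrative)."""
    if avgspeed == -1:
        return {"layer": "noise"}
    return {"layer": "arpeggio", "beat_rate": 0.5 + 0.1 * avgspeed}
```

Fed with the 24-element <avgspeed> list, this selector would switch textures hour by hour, making dataset failures directly audible.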

The Melody Maker
The melody maker machine is a four-voice polyphonic FM synthesizer attached to a resonant filter. It uses the Pd bob object and the spectral delay by Barnecht (2012). A dynamic ADSR controller drives the synth, receiving pulses from the Transport device. The synthesizer has four parameters (pitch, cutoff frequency, resonance and delay feedback) and four outputs for the quadraphonic arrangement.
The synth retrieves the <coordinates> data, corresponding to the GPS location of each bus stop visited on a trip, and <interval>, corresponding to the number of departures per line at each hour of the day. The former was mapped to spatial data by producing an acoustical matrix of 1000 x 1000 values, the latter to the delay feedback parameter. Since our goal was to create an acoustical image of the city, our task was to emphasize a sense of direction toward each cardinal point rather than an exact location, by making direction changes evident. Thus, we adopted a divergent, one-to-many mapping technique, in which "...objects usually change their sound characteristics in several aspects at the same time when varying" (Grond F. 2011, p 370). In this regard, the <x> and <y> coordinates of each visited bus stop were mapped in two different ways.
On the one hand, one coordinate was scaled and connected to the cutoff frequency and, in reverse order, to the resonance parameter, while the other, connected to the pitch value, was scaled and redirected to avoid chromatic relations by selecting only notes of the C minor pentatonic scale. On the other hand, both coordinates were connected to the audio output levels, taking advantage of the quadraphonic speaker system to recreate the cardinal directions in the installation space: the east-west axis was associated with the <x> coordinate and the north-south axis with the <y> coordinate.
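The one-to-many mapping can be sketched as below. The pairing of axes with parameters and all numeric ranges are invented for illustration (the real values are set in the Pd patch); what the sketch preserves is the structure: one coordinate feeding several synthesis parameters at once, pitch quantized to the C minor pentatonic scale, and the two coordinates distributed as gains over the four speakers.

```python
def scale(v, lo, hi, out_lo, out_hi):
    """Linear rescaling, the usual Pd mapping idiom."""
    return out_lo + (v - lo) * (out_hi - out_lo) / (hi - lo)

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]   # semitone offsets from C

def quantize(midi_note: float) -> int:
    """Snap a MIDI note down to the nearest C minor pentatonic degree."""
    octave, degree = divmod(int(midi_note), 12)
    degree = max(d for d in C_MINOR_PENTATONIC if d <= degree)
    return 12 * octave + degree

def map_stop(x: float, y: float):
    """One-to-many mapping for a bus stop in the 1000 x 1000 acoustical
    matrix; all parameter ranges here are illustrative."""
    pitch = quantize(scale(y, 0, 1000, 36, 84))
    cutoff = scale(x, 0, 1000, 200, 4000)
    resonance = scale(x, 1000, 0, 0.1, 0.9)     # reverse order of cutoff
    # quadraphonic gains: x spans west->east, y spans south->north
    gx, gy = x / 1000, y / 1000
    gains = {"NW": (1 - gx) * gy, "NE": gx * gy,
             "SW": (1 - gx) * (1 - gy), "SE": gx * (1 - gy)}
    return pitch, cutoff, resonance, gains
```

Because the four gains always sum to 1, a bus moving eastward is heard drifting toward the right speaker pair while its filter settings shift simultaneously, which is what makes direction changes audible.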

Conclusions
By taking advantage of sonification techniques, Buzu offers a poetic experience of the passage of time through the contemplation of urban traffic. While visiting the installation, it is possible to experience a kind of poetic enchantment in contemplating the complexity of the system. At the same time, an image of the city emerges when the visitor realizes the relation between the sound and the dynamic map. In this regard, the project goal was achieved, since our intention was to generate an alternative view of São Paulo by retrieving data from the bus transportation system. However, although the installation worked very well in its public exhibitions and the buzuDados abstraction that retrieves data from the NuSom dataset was successfully implemented, we can envisage some adjustments. One of them concerns a feature capable of interrupting calls made to the dataset. In addition, we plan to implement internal messages in the control module of the buzuDados data flow, which will enable the expansion and maintenance of data parsing features in a more robust way, facilitating the abstraction's implementation by third parties.
Although buzuDados performs operations specific to Buzu, it deals with processes relevant to other sound designers concerned with big data sonification. buzuDados makes it feasible for other artists and experimenters to implement the InterSCity dataset in Pd. We expect to create alternative sonifications of the NuSom dataset using the buzuDados abstraction, in collaboration with other members of the research group. Furthermore, closer collaboration with InterSCity members is also planned regarding the implementation and use of other datasets to develop new artistic and technical works.

Acknowledgments
This research is part of the InterSCity project sponsored by the INCT of the Future Internet for Smart Cities funded by CNPq, proc. 465446/2014-0, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior -Brasil (CAPES) -Finance Code 001, and FAPESP, procs. 2014/50937-1 and 2015/24485-9. In addition to thanking Prof. Fabio Kon, coordinator of InterSCity, we want to acknowledge the collaboration and friendship of NuSom members who went to great lengths to produce the Sons de Silício exhibition. We also want to mention the fundamental support of the Centro Universitário Maria Antonia that hosted the exhibition and offered