
Testing Times…

As the subjective testing for our research project looms later this week, it seemed like a good opportunity to post a progress update, and also to promote some other real-world productions I’ve recently been working on.

This Thursday afternoon and evening, Ash and I will be conducting our subjective listening tests in one of the studios at MCUK. We’re now at the stage where we have a 10-attribute assessment, and participants will use iPads to enter their choices while listening to the test material. For each attribute being tested we are using appropriate audio, which should stress both systems. Each audio excerpt is about 15 seconds and will be played twice – for system A and then system B – so each test ‘sitting’ shouldn’t take more than 12-14 minutes, with gaps between questions for any comments to be added. The test questionnaire was put together using Google Forms, so all responses are automatically recorded to a spreadsheet where we can later analyze the data. Additionally, using iPads for participant response means we can test in low light, negating the need to screen the two systems from view, which could introduce bias in the results.
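
As an aside, because Forms dumps everything into a spreadsheet, the tallying can be scripted rather than done by hand. Here’s a minimal Python sketch, assuming the responses have been exported to ‘responses.csv’ with one column per attribute question and each cell naming the preferred system (the column names below are placeholders, not our real questionnaire fields):

```python
# A rough sketch of tallying the Forms export. Column names are
# illustrative placeholders, not the real questionnaire fields.
import pandas as pd

responses = pd.read_csv("responses.csv")

attributes = ["Spatial impression", "Sound balance", "Freedom from noise"]

for attribute in attributes:
    counts = responses[attribute].value_counts()  # e.g. "System A": 14
    total = counts.sum()
    print(f"{attribute}:")
    for system, n in counts.items():
        print(f"  {system}: {n}/{total} ({100 * n / total:.0f}%)")
```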

Prior to testing on Thursday, we plan to spend Wednesday evening setting up the listening environment and systems for optimum accuracy. The 5.1 system will be set up as per ITU-R BS.775, with all listeners seated within the accepted boundaries. The Bose sound-bar will be calibrated to the room size using the ‘ADAPTiQ’ headset, and both systems will be calibrated to a listening level of 85 dB SPL. The Wednesday session will also allow time for running some pilot tests to iron out any potential issues with equipment, material and test format.
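
The level-matching arithmetic itself is trivial, but for completeness here’s a sketch of it; the measured SPL values are purely illustrative:

```python
# Level-matching arithmetic: the dB trim needed to bring the SPL measured
# at the listening position up (or down) to the 85 dB SPL target.
TARGET_SPL = 85.0  # dB SPL at the listening position

def gain_trim_db(measured_spl, target_spl=TARGET_SPL):
    """Return the dB adjustment to apply to the system's output level."""
    return target_spl - measured_spl

print(f"{gain_trim_db(82.3):+.1f} dB")  # +2.7 dB (boost)
print(f"{gain_trim_db(88.0):+.1f} dB")  # -3.0 dB (cut)
```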

If you are free to take part in our subjective listening tests on Thursday afternoon from 5.30pm then please get in touch on 3khzstudio@gmail.com or DM me @3KHz on Twitter.

After family, University studies have taken up the majority of my time lately (aside from cycling and MTB’ing!) and I’ve not been as busy with audio projects. That said, I’ve managed to squeeze in a few interesting things…

Earlier this year I did a 5-day tracking session with Cavan Moran. I then spent a week or so mixing and mastering what was to become his “Five Simple Crimes” EP, which launched to some great reviews at Takk in the Northern Quarter last month. Cav’s songwriting is exceptional because it’s so believable. Marry that with simple, steady guitar lines and the accompanying harmonies and piano of Emma Whitworth, and the result is something very special and timeless. Here’s a quick one-take live recording we did during the session of the EP’s opening track.

The session was recorded at 48k/24-bit straight into Pro Tools. We mainly used an X/Y pair of Neumann KM184s for Cav’s acoustic guitar into BAE 1073 Neve clones, and an ’87 for all of his vocals through a UA 6176 pre. We double-tracked Emma’s BVs (again, using the ’87) and pushed them very far away with a tonne of reverb, giving a very haunting effect. Vocal timing for harmonies was tightened up using Synchro Arts’ VocALign plug-in. We experimented with different mics and pres for the piano, but settled on a pair of AKG 414s (HPF’d above 80 Hz, in omni) into a Focusrite ISA428 Mk1, which gave an open and detailed sound, as my piano is not the brightest! The piano was miked on the soundboard, with the mics about 3′ apart and 2′ away. We used acoustic absorbers behind the mics to reduce sound reflected from the nearby plastered walls. As for guitars, if my memory serves me right the majority of the picked parts were played on a Martin OM, with any strummed arrangements on a pretty old, beat-up Gibson J50!

Post-production-wise, I’ve recently completed the editing, mixing and mastering of 13 tracks for a musical film called “Ordinary World”. This was a tough job with tight deadlines, but ultimately very rewarding. As the film is due to premiere at The Cornerhouse in the new year I’m unable to post any clips just yet, as finishing touches are being applied prior to an ‘official’ trailer release. However, you can listen to one of the tracks here…

Before I came on board, all the arrangement and recording of the songs was done in GarageBand and Logic using a combination of virtual instruments and live musicians. I was then dropboxed stereo files of each element to work on. The first thing I did was split pretty much everything into mono and ditch what I didn’t need. I then strip-silenced everything to get more clarity and see what was happening and where. Once I’d got a rough idea of the balance I wanted, I set about cleaning up the vocals, which were very noisy and all over the place in level. Waves’ ‘Vocal Rider’ helped a lot with this, followed by a touch of compression. Some quite severe EQ decisions were made for certain MIDI instruments, with those settings then saved and re-used on the same instrument in another song. This saved time and was a good starting point for getting some life into virtual instruments and helping them to sit right. In order to get everything to a similar level and minimise the need for difficult mastering, I mixed into the same signal chain on my main stereo bus: TL Audio EQ2, Waves Bus Compressor, Sonnox Limiter and finally a McDSP ML4000 mastering limiter. All mixes were dropboxed back to the client, who would feed back to me via mix-notes in the relevant song folder. All in all, I spent about two weeks working on the project and, aside from one meeting at completion, never needed to see the client! Dropbox meant I didn’t need to leave the house…!
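
Mixing into one fixed bus chain did the level-matching job here, but the ‘similar level’ step could also be scripted offline. A minimal sketch, assuming the third-party soundfile and pyloudnorm packages and a folder of WAV mixes (the -16 LUFS target is illustrative, not what was used on this project):

```python
# Batch loudness-matching a folder of WAV mixes to one integrated-loudness
# target before mastering. Assumes the third-party 'soundfile' and
# 'pyloudnorm' packages; the -16 LUFS target is illustrative.
import glob

import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -16.0

for path in glob.glob("mixes/*.wav"):
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 meter
    loudness = meter.integrated_loudness(data)  # current integrated loudness
    matched = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
    sf.write(path.replace(".wav", "_matched.wav"), matched, rate)
```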

Spreadable or Sell-able Media?


Twitter staff applauding the doughnut towers consumed to mark their flotation

(From Campbell & Pow, 2013)

Last week saw Twitter brought to the market with a proposed listing value of $12.8bn, as its shareholders looked to cash in on the internet business probably most responsible for the growth of ‘spreadable media’. However, this idea of linking people up and bringing them together is a far cry from the original thinking behind Twitter, which was developed as a means to keep us online and spending money by “people with little sense of politics and even less of culture” (Appleyard, 2013). The people in question – Twitter’s key founders, Jack Dorsey and Evan Williams – initially had different views on the purpose of their application, with Williams’ ideal of a general interest in the world (“What’s happening?”) contrasting with Dorsey’s vision of a more narcissistic stream of our own human conditions. Unsurprisingly, this acorn of animosity between the two grew into an intense personal hatred as the company ballooned. After its inception in 2006 Twitter took time to get a foothold, and it wasn’t until 2008 that it finally ‘arrived’, with 400m tweets posted that year, 50m per day by 2010, and then 340m tweets per day coming from its 500m registered users in 2012 (Wikipedia, 2013). The result is what we know today: a hybrid of Dorsey’s and Williams’ visions, blending one’s self with – more importantly – ‘news’, which on its first day of trading saw shares close 73% up, implying that the business had been undervalued to the tune of $1bn (Chakrabortty, 2013). So, where does this astronomical valuation come from for a company that is yet to post a profit? I guess the answer lies partly in the battleground for spreadable media, as corporate thinking and business models (like Web 2.0) attempt to ‘commodify’ the participatory culture of the Internet (MIT, 2013), and in particular social media.

Spreadable media is an all-encompassing term for content that spreads across the Internet via circulation as opposed to distribution. The latter – according to Jenkins, Ford and Green (MIT, 2013) – is a more (usually corporately) controlled and regulated ‘top-down’ system. It is what ‘The Man’ wants to tell ‘Us’. Circulation is referred to as a ‘hybrid system’ which – while including original content – is predominantly regulated material that is then freely shared across the internet at a grass-roots level… from the ‘bottom up’. This ‘unauthorized’ (often adulterated!) passing along of content is at the core of spreadable media, and while it is both uncontrollable and unpredictable, it generates meaning and value in an internet-based culture. This is where the billions come in. Remember that Twitter valuation?

So what exactly is spreadable media? Well, essentially anything in the digital domain of the Internet is spreadable, but – aside from top-down marketed content which is initially distributed and then circulated – the types of spreadable media of most interest culturally are memes, remixes, mashups and supercuts, which via their virus-like proliferation allow circulators to “affirm their commonality” (Jenkins et al., 2009), give a sense of ‘community’ and in many cases give people access to news, ideas, movements and politics that they were not formerly privy to. Most importantly, spreadable media is at the heart of participatory culture and ever-burgeoning online communities, as it gives its users just that: a sense of community. It facilitates the making of social relationships where contacts (friends?) are made, creativity is sparked and a feeling of being part of ‘something bigger’ is nurtured. ‘Hacktivism’ and the growth of the Anonymous movement is an example of this, where an initially small group of – let’s face it, geeks – came together in a common cause, spreading media and information and growing collectively in opposition to those who wanted to control and censor the Internet. Self-termed the “final boss of the Internet”, these “nameless, faceless punks were having a geopolitical impact” (T3Combat, 2013). Their notoriety negates the need to tell their full story, but I would suggest watching this for a great overview of their meteoric rise and impact on the world – in particular their attack on Scientology, how they helped with the situation in Egypt and their orchestration of the ‘Occupy’ movement.

Political themes are a common driver in spreadable media, but others include news items, popular and contemporary culture and, nearly always, humour. Indeed, a formula for spreadability has been coined – “S = C + L” – where ‘S’ is spreadability and ‘C’ and ‘L’ are current affairs and ‘LOLz’ respectively. Jenkins et al. (2009) state that spreadable media contains “absurd humour or parody” while at the same time expressing “themes of the community”.

Nearly all examples of spreadable media allow the content generators and sharers to express their creativity, whether by remixing an existing idea, mashing up several, supercutting a body of work to the nth degree or, in the case of memes, creating a new idea. The last of these is interesting, as memes are perhaps the most virus-like (à la Dawkins), spreading from one person to the next, proliferating and then dying. Memes are quick to enjoy, easy to absorb and therefore very spreadable, becoming increasingly popular and recognizable. Collectively we’ve developed “a kind of meme literacy” (Termine, 2012).

[Image: meme of the Deputy Prime Minister, as tweeted]

My own current research is neither funny nor featured in world news or current affairs, so creating directly related spreadable media to promote it is difficult. However, driving traffic to my blog by generating random content (which includes my blog and Twitter details) could be used as a tool to draw people in to the more serious side of my internet presence. Therefore I’ve been trying my hand at different approaches, creating a variety of content with the ultimate aim of getting people to read my blog. I’ve tried a cat-based “I can has” meme (which are hugely popular) on a previous post, tweeted a more political/mildly funny meme of the Deputy Prime Minister (above), posted a remix of his boss on YouTube, done a few ‘sympathy tweets’ trying to raise awareness and support for typhoon victims in the Philippines (tapping into a growing consciousness in ‘virtual kindness’), and most recently had a stab at supercutting the most popular TV show on BBC2. After the laborious task of downloading and converting the source clips came the painstaking task of trawling through them all and editing out the bits I wanted, which were then rendered in a completely random order. The video was uploaded to YouTube, and links were tweeted, retweeted, posted to the supercut website and onto Reddit. Over the weekend I’ll get somebody to post it on Facebook (I’m a conscientious objector) and then hopefully in a month or so I can look at how far it spreads.

Finally, let’s get back to the Twitter valuation. In the documentation produced for its flotation it claims, “Our success depends on our ability to provide users of our products and services with valuable content, which in turn depends on the content contributed by our users”. What Twitter recognizes is that without ‘us’ it amounts to very little, and by selling us to us its founders become billionaires and many of its staff millionaires…

References:

Appleyard, B. 2013. The tricks and the tweets. The Sunday Times Culture, 10th November 2013, p.38.

Campbell, P & Pow, H. 2013. The Mail [Online]. Retrieved 13th November 2013, from http://www.dailymail.co.uk/news/article-2489994/Twitter-HQ-celebrates-faux-Cronuts-day-trading-ends-44-90-share-making-founders-multi-millionaires-valuing-company-31-billion.html

Chakrabortty, A. 2013. The Guardian [Online]. Retrieved 13th November 2013, from http://www.theguardian.com/commentisfree/2013/nov/11/twitter-ipo-wrong-people-money

Jenkins, H. Li, X. Krauskopf, A.D. Green, J. 2009. If it doesn’t spread it’s dead: Creating value in a spreadable marketplace. Retrieved 12th November 2013, from http://convergenceculture.org/research/Spreadability_doublesidedprint_final_063009.pdf

MIT. 2013. Spreadable Media: Creating Value in a Networked Society. Retrieved 11th November 2013, from http://video.mit.edu/watch/spreadable-media-creating-value-and-meaning-in-a-networked-society-8705/

T3Combat. 2013. Anonymous: How Hackers changed the world. Retrieved 14th November 2013, from http://www.youtube.com/watch?v=d-d5TDHa8jw

Termine, R. 2012. What makes a meme? Retrieved 14th November 2013, from http://www.salon.com/2012/10/28/what_makes_a_meme/

Wikipedia. 2013. Twitter. Retrieved 12th November 2013, from http://en.wikipedia.org/wiki/Twitter


When headphones just won’t do…

can i has surround sound?

Research Project Update

Fuelled by beer and noodles, Ash and I met up last night to discuss our proposed research project and start refining our question and approach. Below is a quick summary…

After an extensive literature review we’ve decided to adapt what we are looking at and how. Initial ideas were based around evaluating different surround-sound codecs using either BS1116 (ITU, 1997), or perhaps looking at a range of formats and assessing differences using BS1534 (ITU, 2003). Suffice to say there’s already a lot of research and subjective testing in these areas.

When originally considering formats, we both agreed to include the new Bose sound-bar in any testing. While not strictly a surround-sound system per se, the sound-bar is marketed as an immersive sound experience, with a linear ‘array’ of 5 enclosed speakers and a separate sub-woofer. 5.1 systems have been used in the consumer domain for some time now, and the sound-bar is attempting to grab a share of this market.

This got us thinking: just how good is it? The logical answer was to put it up against a 5.1 system in controlled conditions and conduct a comparative subjective evaluation. Bose – for whatever reason they see fit! – appear reluctant to release any technical data regarding either the build of, or the algorithms at work within, their product. The apparent lack of subjective research in the field was also a factor in guiding us towards our decision. The research project title has now become…

“Sound-bar technology versus 5.1: a subjective evaluation of perceived audio differences”.

In terms of methodology, we rejected the frameworks of BS1116 and BS1534 as they are too linear, concerned mainly with the one-dimensional concept of quality across different codecs. Their stringent guidelines also focus on expert listeners detecting small differences. We needed a more flexible approach, and as a result decided that BS1284 (ITU, 2003) was the most suitable framework within which to conduct our subjective test, as “more general assessments usually involve larger differences and therefore do not usually need such close control of test parameters” (ITU, 2003). EBU technical document 3286 (EBU, 1997) was also considered, as its data collection is very transparent and the resulting ‘radar graphs’ depicting quality-attribute results are easy to understand, but the paper stipulates that only classical or acoustic music recorded in a real space can be assessed. As sound-bars and 5.1 systems are used for a variety of audio reproduction, this didn’t appear the best fit, as it limited our choice of programme material.

In accordance with the recommendation we aim to have either ten expert or twenty non-expert listeners who will be familiarized with the test procedure, environment and material prior to commencing.

As we are currently proposing to use a ‘triangle’ test for comparison, our audio excerpts will employ the 7-grade comparison scale as set out in the recommendation, and our programme material will be selected to “stress the system”: a surround-sound recording of classical music, a 5.1 music mix of an appropriate studio recording, and audio from a film. We intend to source our programme material from commercially available SACD and either Blu-ray or DVD.
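
For anyone curious how the trial logistics might look in practice, here is a minimal sketch of a randomized presentation order plus the 7-grade comparison scale; the excerpt names are placeholders, and this is not our final test script:

```python
# Generating a blind presentation order per excerpt, plus the 7-grade
# comparison scale from BS.1284. Excerpt names are placeholders.
import random

SEVEN_GRADE_SCALE = {
    3: "Much better", 2: "Better", 1: "Slightly better", 0: "The same",
    -1: "Slightly worse", -2: "Worse", -3: "Much worse",
}

excerpts = ["classical_surround", "studio_5.1_mix", "film_audio"]
systems = ["5.1 rig", "sound-bar"]

for excerpt in excerpts:
    order = random.sample(systems, k=2)  # which system plays first varies
    print(f"{excerpt}: system A = {order[0]}, system B = {order[1]}")
```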

In terms of ‘audio differences’ we plan to finalize our list of attributes at our next meeting, where we will also design our subject input sheet for recording their scores and any comments, which will serve as anecdotal evidence. BS1284 stipulates 7 main attributes to test, each of which has a family of sub-attributes should we wish to drill down. Attribute selection will be confirmed when programme material is decided upon, and subjects will receive an explanation of each attribute and be given an audio example prior to testing. Some examples of main attributes are spatial impression, stereo impression, transparency, sound balance and freedom from noise.
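
If we do adopt EBU 3286-style ‘radar graphs’ for reporting the attribute scores (as mentioned above), they are straightforward to produce. A minimal matplotlib sketch, with entirely made-up mean scores on a 5-point scale:

```python
# An EBU 3286-style 'radar graph' of mean attribute scores for the two
# systems, drawn with matplotlib. All scores are made-up placeholders.
import numpy as np
import matplotlib.pyplot as plt

attributes = ["Spatial impression", "Stereo impression", "Transparency",
              "Sound balance", "Freedom from noise"]
mean_scores = {
    "5.1":       [4.2, 4.0, 3.8, 4.1, 4.5],
    "Sound-bar": [3.1, 3.4, 3.6, 3.9, 4.3],
}

angles = np.linspace(0, 2 * np.pi, len(attributes), endpoint=False).tolist()
angles += angles[:1]  # repeat the first point to close each polygon

ax = plt.subplot(polar=True)
for system, scores in mean_scores.items():
    ax.plot(angles, scores + scores[:1], label=system)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(attributes)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```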

In terms of technical aspects, we propose to conduct the listening tests in Newton, in a room adhering to the standards set out in BS1116 (as defined by BS1284), with the 5.1 speakers configured as per BS775-3 (ITU, 2012). The Bose sound-bar will be set up according to the manufacturer’s instructions and self-calibrated to the size of the room using test-tones.

References:

ITU. 1997. Recommendation ITU-R BS.1116: Methods for the subjective assessment of small impairments in audio systems including multichannel sound systems. Retrieved 6th November 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1116-1-199710-I!!PDF-E.pdf

ITU. 2003. Recommendation ITU-R BS.1534: Method for the subjective assessment of intermediate quality level of coding systems.  Retrieved 13th October 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1534-1-200301-I!!PDF-E.pdf

ITU. 2003. Recommendation ITU-R BS.1284: General methods for the subjective assessment of sound quality. Retrieved 6th November 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1284-1-200312-I!!PDF-E.pdf

ITU. 2012. Recommendation ITU-R BS.775-3: Multichannel stereophonic sound system with and without accompanying picture. Retrieved 12th October 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.775-3-201208-I!!PDF-E.pdf

EBU. 1997. Technical Document 3286: Assessment methods for the subjective evaluation of the quality of sound programme material – Music. Retrieved 6th November 2013, from http://tech.ebu.ch/docs/tech/tech3286.pdf

The Encyclopedic Palace: vision of a networked world?

On a recent trip to Venice I visited the 55th Biennial Arts Festival (“La Biennale”) in search of a bit of culture and as a break from work, university and the rain. I’d been doing quite a bit of reading about the future of learning, open education and shared knowledge, which is why I wanted to write this brief post. The theme of the exhibition was uncannily similar to the paradigm of a networked world, where all knowledge is shared and available…

From La Biennale website:

This year’s festival is entitled “Il Palazzo Enciclopedico” and draws inspiration from the model of a utopian dream by Marino Auriti, who filed a design with the U.S. Patent Office in 1955 depicting his Palazzo Enciclopedico, an imaginary museum that was meant to house all worldly knowledge. Auriti planned the model of a 136-story building to be built in Washington, which would stand 700 meters tall and take up over 16 blocks.

[Image: Marino Auriti’s model of the Palazzo Enciclopedico]

“Auriti’s plan was never carried out, of course”, festival curator Massimiliano Gioni says, “but the dream of universal, all-embracing knowledge crops up throughout the history of art and humanity, as one that eccentrics like Auriti share with many other artists, writers, scientists, and self-proclaimed prophets who have tried—often in vain—to fashion an image of the world that will capture its infinite variety and richness. Today, as we grapple with a constant flood of information, such attempts to structure knowledge into all-inclusive systems seem even more necessary and even more desperate.”
“Blurring the line between professional artists and amateurs, outsiders and insiders, the exhibition takes an anthropological approach to the study of images, focusing in particular on the realms of the imaginary and the functions of the imagination. What room is left for internal images—for dreams, hallucinations and visions—in an era besieged by external ones? And what is the point of creating an image of the world when the world itself has become increasingly like an image?” (La Biennale, 2013).
Marino with his models of The Palace (from http://forte-e-gentile.blogspot.it/2012/02/io-vivo-encyclopedic-palace-rises-again.html)

I can’t help but think that Marino just might have been onto something….

You can read more about Marino here in a blog by his grand-daughter.

Reference:

La Biennale. 2013. 55th International Art Exhibition – The Encyclopedic Palace. Retrieved 5th November 2013, from http://www.labiennale.org/en/art/news/13-03.html

“Copy, Transform, Combine…”


From Popova, 2013.

In an earlier post I mentioned I’d been watching the Martin Weller YouTube playlist ‘Understanding OER in 10 Videos‘, as a one-stop introduction to open educational resources. One of the most engaging items was Kirby Ferguson’s excellent ‘Everything is a Remix‘, which – in a series of four short films – explains his view that creativity is all about copying, combining and transforming.

The film charts the origin of the term itself, from the sample-heavy remixes of hip-hop music in the 1980s through to the current state of affairs in a world where the concept of ‘intellectual property’ reigns and litigation is just one hindrance to our social evolution. On the way we learn how the first remixes were happening before the phrase was even coined, as people like William Burroughs employed a ‘cut-up technique’ in writing ‘The Soft Machine’, and musicians, including Led Zeppelin, often plied their trade in the realms of ‘legal remixing’ through ‘knock-offs’ and cover versions (Ferguson, 2012[a]).

In a related TED Talk, Ferguson continues this theme using Bob Dylan as an example, claiming that two-thirds of Mr. Zimmerman’s melodies were copied from his folk predecessors. The creative ‘genius’, however, was in taking (copying) these melodies, transforming them, and combining them with new lyrics. Another more contemporary example is Danger Mouse’s ‘Grey Album‘, a remix of The Beatles’ White Album and Jay-Z’s Black Album. The remix became an Internet sensation but resulted in a stream of ‘cease and desist’ orders from EMI, who owned The Beatles’ copyright, claiming “unfair competition of [our] valuable property” (2012[b]). EMI’s corporate stance was in contrast to Jay-Z’s: although the a cappella versions of his songs were copyrighted, they were “released for the implicit purpose of encouraging mashups and remixes” (Wikipedia, 2013).


From Hodder, 2004.

Ferguson’s film is filled with compelling evidence that most of what we perceive as new or original is probably a remix of some kind, with box-office films being a case in point. If we take the top ten highest-grossing films per year, each year from 2002-2012, seventy-four of those one hundred films are either remakes, sequels, or adaptations of graphic novels, video games or books (Ferguson, 2012[a]). The idea of drawing from references to become a reference point yourself is further illustrated by the ‘monomyth’ Star Wars example. Lucas’ space classic directly references the early sci-fi films which preceded it, as well as copying Japanese samurai films, combining them with themes from old western and war movies, and transforming them with new technology and fresh dialogue. An example of creativity coming from without, not from within. The concept of learning via copying is a theme Ferguson is keen to get across: we struggle to produce anything new “until we’re fluent in the language of our domain”.

So. Where does all this lead? Well, in Ferguson’s view we have to copy to “gain knowledge and understanding”. There is also a strongly held view that existing ideas can be transformed into something new, and that dramatic developments can occur when ideas combine. Referring back to an earlier post about the importance of the printing press in the 15th century (akin to the Internet in the modern age): all the components had been around for centuries, but it was their combination in about 1440 that led to the ‘breakthrough’ and the invention we now refer to as the printing press.

So far, so positive? All this shared knowledge for ‘The Common Good’, and it’s impact culturally and effect on social evolution is a good thing, right? If only…

The rise and rise of remixing (in particular copying) has been paralleled by the prevalence of the concept of ‘intellectual property’, and The Common Good has been hijacked by market forces and loss aversion. Copyright and patent laws intended to “promote the progress of useful arts” are doing exactly the opposite as litigation increases. The ‘competition’ between computer companies and, more recently, mobile phone manufacturers highlights this, with billions of pounds being wasted on lawsuits every year. Lessig (2008), amongst other things, calls for the “decriminalization of file-sharing”, which could increase creativity in a remix culture while at the same time pulling the rug from under the feet of opportunistic litigators.


From ReadWrite, 2013.

My own first attempt at a “remix” of ideas was to take footage from a TED Talk by David Cameron and, by (super)cutting in other footage and images, reverse the message and offer the alternative viewpoint. The dialogue and narrative of the talk were removed and replaced with edited audio from other sources, and the clip was given a ‘suitable’ soundtrack in keeping with the message. What was originally a piece of Conservative Party spiel about the ‘next age of government’ soon became a short supercut with a strong anti-government theme. Like most remixing, this required more time than skill, and within the space of a few hours the edit was done and uploaded. A new message was created from a seemingly straightforward video and – at the click of a mouse – distributed globally.

Links: 

Lawrence Lessig’s “Remix: Making art and commerce thrive in the hybrid economy”.

References:

Ferguson, K. 2012[a]. Everything is a Remix. Retrieved 4th November 2013, from http://www.youtube.com/watch?v=coGpmA4saEk

Ferguson, K. 2012[b]. Embracing the Remix. Retrieved 4th November 2013, from http://www.youtube.com/watch?v=zd-dqUuvLk4

Hodder, L. 2004. Yet Another Copyright / Remix Culture Struggle With a Mouse or Why I Get Whiplash Thinking About the Disney Dichotomy. Retrieved 4th November 2013, from http://napsterization.org/stories/archives/000170.html

Lessig, L. 2008. Remix: making art and commerce thrive in the hybrid economy. Retrieved 4th November 2013, from http://www.scribd.com/doc/47089238/Remix

Popova, M. 2013. How Remix Culture Fuels Creativity & Invention: Kirby Ferguson at TED. Retrieved 4th November 2013, from http://www.brainpickings.org/index.php/2012/08/14/kirby-ferguson-ted/

ReadWrite. 2013. The Mobile Patent Wars: Are We Ready for This to Go Thermonuclear? Retrieved 4th November 2013, from http://readwrite.com/2012/02/14/the-mobile-patent-wars-are-we#awesm=~omeaNboIeLpVRd

Wikipedia. 2013. The Grey Album. Retrieved 4th November 2013, from http://en.wikipedia.org/wiki/The_Grey_Album

Small world, eh?

The idea of using the Internet for collaborative projects has been kicking about for quite a while now. What may have started with cumbersome email communications and long-winded FTP’ing for file sharing has since been streamlined by a whole heap of developments. For some years I’ve been using both Google Drive and Dropbox for sharing large files, particularly on audio mixing and mastering projects. It’s so much easier to put versions of mixes in a Dropbox folder and await feedback from the client than to burn them onto CD and pop them in the post! Since version 10.3, my DAW of choice, Pro Tools, has had the ‘share with Gobbler or SoundCloud’ feature, making it even easier for individuals to work collaboratively on big sessions (Sound on Sound, 2013). Google’s Calendar feature can also be a useful tool for organizing timelines for working groups and keeping collaborative projects on track.

Collaborative mapping takes the idea of working together to the next level and is “an initiative to collectively create models of real-world locations online, that people can then access and use to virtually annotate locations in space” (Gillavry, 2013). Such maps enable us to create a virtual reality made up of points of interest relevant to our professional field or area of academic research. A quick introduction to collaborative mapping by Eyal Sela (2009) can be found here. Perhaps most significantly, Sela adds that the disparate nature of Internet-based global working can be eased with “shared, collaborative maps [which] can improve the perception of proximity by creating a visualization of all the team members’ location”. With everything in one place, the world just got smaller.

A collaborative map can be a vehicle for getting your own research ‘out there’ and also a tool for hooking up with other researchers and professionals in your field or other disciplines. The idea is that it nurtures “processes and methods that integrate people, spatial data, exploratory tools, and structured discussions for planning, problem-solving and decision-making” (Balram & Dragićević, 2006). As my own project grows, a collaborative map with points of interest – people researching the same area (both students and teachers), participants for listening tests, locations for testing and leading academics in the field – would provide a one-stop visual map of everything relevant to my study.

As part of the #iCollab community, group members are able to visualize each other’s location on a world map and zoom in, instantly accessing contact information, blog addresses, LinkedIn details, areas of study, etc. The whole idea is one of sharing – either collaboration in the same field, or cross discipline hook-ups. There are no longer boundaries, as the Internet, mapping tools and social media have rubbed them all out.  We are only ever a few clicks away from another researcher, academic or professional anywhere in the world who may have shared interests and want to collaborate. The way we are learning is changing and the possibilities are endless.
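
Under the hood such a map is just a list of placemarks, so you can even roll your own. A minimal Python sketch that writes a KML file Google Maps can import, with hypothetical member names and coordinates:

```python
# Writing a member list out as KML, which Google Maps can import directly.
# Names and coordinates below are illustrative placeholders.
members = [
    ("Researcher, Manchester", -2.2426, 53.4808),
    ("Researcher, Auckland", 174.7633, -36.8485),
    ("Researcher, Barcelona", 2.1734, 41.3851),
]

placemarks = "\n".join(
    f"    <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in members  # note: KML wants lon,lat order
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "  <Document>\n"
    f"{placemarks}\n"
    "  </Document>\n"
    "</kml>\n"
)

with open("icollab_members.kml", "w") as f:
    f.write(kml)
```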

[Image: Google map showing the members and locations of the #iCollab community]

Variation on a theme: The future of learning in a networked society

Links:

Online Maps: 50+ Tools and Resources

Robin Good’s best online collaboration tools

References:

Balram, S & Dragićević, S. 2006. Collaborative Geographic Information Systems. Retrieved 26th October 2013, from http://books.google.co.uk/books?hl=en&lr=&id=mHIU5BlYTCwC&oi=fnd&pg=PP2&dq=collaborative+mapping&ots=ng1muAs4bk&sig=4q8A6sccvMQCMulgUx4uKXNNfWw#v=onepage&q=collaborative%20mapping&f=false

Gillavry, E. 2013. Collaborative mapping. Retrieved 27th October 2013, from http://www.webmapper.net/carto2003/

Sela, E. 2009. How To Create Shared Collaborative Google Maps. Retrieved 28th October 2013, from http://www.makeuseof.com/tag/how-to-create-shared-collaborative-google-maps/

Sound on Sound. 2013. A Project Shared…Pro Tools Tips & Techniques. Retrieved 28th October 2013, from http://www.soundonsound.com/sos/apr13/articles/pt-0413.htm

A short film about Research 2.0

A recent university project entailed each student making a short film using only our mobile phones and available apps. My previous film-making experience was pretty much zero, with most of it being accidental, i.e. my phone being set to the ‘video’ mode when I wanted photos. We’ve all been there…

What initially seemed a daunting task turned out in the end to be a fun and creative process, as I developed an initial idea, got to grips with phone functions and apps, taught myself how to edit with iMovie and sourced freely available and Creative Commons (CC) music to provide a suitable soundtrack. The size of smartphones makes them ideal for quick and easy film-making, and I wanted to get a mix of shaky hand-held and ‘point of view’ (PoV) shots along with more detailed static HD shots captured using improvised tripods.

For the film subject I decided to follow up on an earlier post and continue the theme of Open Education and ‘Research 2.0’ and try to highlight in just over a minute how the internet has changed the way we learn, research, teach and share knowledge. It’s a big subject, but using different apps and music for each ‘half’ of the film I was able to create a distinct contrast… a kind of ‘before and after’ scenario highlighting the huge difference the internet has made in the way we educate and are educated.

I wanted to exaggerate the fact that before the internet – perhaps with the exception of class/tutor contact time – research was a somewhat lonely pursuit, with age-old practices of isolated reading and note-taking, hours lost in libraries studying papers and books, and perhaps a phone call or two if you were lucky. To convey this outdated approach I went for a very dated look and made use of the ‘1920s’ setting on the ‘8mm Vintage Camera’ app (Nexvio, 2012). This same app was used by Malik Bendjelloul to complete his 2012 Oscar-winning film “Searching for Sugar Man” (IMDb, 2012). Period-correct music to suit the black-and-white grainy footage was freely obtained from the ‘Internet Archive’, founded to build an internet library and “offering permanent access for researchers, historians, scholars, people with disabilities, and the general public to historical collections that exist in digital format” (Internet Archive, 2001). The shaky silent-movie-style text ‘cards’ were created by simply typing into MS Word and then filming my computer screen with the 8mm app on an iPhone. You can see the cursor flashing on some of the shots!

This contrasts with the static, crisp red lettering of ‘EVERYTHING’ as a post-internet take on research unfolds, with a musical transition to more contemporary electronic dance music courtesy of the ‘Free Music Archive’ (FMA), whose page title informs us: “Creative Commons: Share, Remix, Reuse” (FMA, 2013). The HD footage of this sequence is in stark contrast to the low-resolution footage that precedes it, with the pace of the film mirroring the increased speed at which research may take place as more flexible tools are put at our disposal. The time-lapsed video sequences were created using the OSnap! app (Cegnar, 2013) and accelerate everything. A lot like the internet. I wanted to create an exaggerated feeling of speed and activity, mainly happening in front of a computer screen, as Google, Twitter, blogs, Blackboard, email, online libraries, music software and films were all opened in multiple windows. This is an attempt to communicate the fact that learning now operates this way: books and papers can be accessed online; we can communicate using email or applications like Skype and FaceTime; we can blog about our research and follow others with similar interests; we can tweet about what we’re doing and tailor our Twitter feeds, streamlining our ‘news’ around our own interests; and Google can lead us to places never before possible, with Google Scholar a useful extension for the academically inclined browsers out there.

Please feel free to share, remix and reuse.

References:

Cegnar, J. 2013. OSnap! Time-Lapse and Stop Motion. (iPhone application). Vers. 2.9.2. Available from Apple Application Store : < https://itunes.apple.com/gb/app/osnap!-time-lapse-stop-motion/id457402095?mt=8 >.

Free Music Archive. 2013. Creative Commons: Share, Remix, Reuse. Retrieved 26th October 2013, from http://freemusicarchive.org/curator/creative_commons

IMDb. 2012. Searching for Sugar Man. Retrieved 27th October 2013, from http://www.imdb.com/title/tt2125608/

Nexvio Inc. 2012. 8mm Vintage Camera. (iPhone application). Vers. 1.8. Available from Apple Application Store : < https://itunes.apple.com/gb/app/8mm-vintage-camera/id406541444 >.

The Internet Archive. 2001. About the Internet Archive. Retrieved 26th October 2013, from https://archive.org/about/

Research In Emerging Technologies: A Project Outline

As part of an earlier project on my MSc in Audio Production I spent some time looking at spatial audio, with the emphasis on surround-sound recording techniques. Rumsey’s “Spatial Audio” (2001) is a great introduction to the subject, as is the paper by Kassier et al. (2005), which offers an informal yet detailed comparison of available recording techniques.

This semester, as part of our research module, “The Dr.” and I are continuing with the theme of multichannel audio, but with the focus now on its reproduction, as we plan to investigate the spatial capabilities of surround-sound formats.

At this stage we are still honing our specific research question, and as this is a very iterative process exact details and approaches are likely to change over the coming few weeks as background literature is reviewed and testing methods are fine-tuned. However, the basic line of the research is looking something like this…

“Is there a perceivable difference in spatial impression across surround sound formats?”

We aim to select five different formats and – using ITU-R BS.775-3 (ITU, 2012) as a reference for system setup – conduct subjective tests with approximately 20 participants in a critical listening environment, as outlined by Rumsey (2001). MUSHRA testing as defined by ITU-R BS.1534 will form the basis of the subjective assessment, as this is the most appropriate method for investigating perceived spatial impression of intermediate-quality audio (ITU, 2003). Prior to the listening tests, we plan to crowd-source participants by distributing an online questionnaire through which gender, age and occupation data can be collected and triangulated with our subjective test results. Exact details of the questionnaire and listening test format are still to be finalised. We are also considering the logistics of participants undergoing a basic hearing test before they can take part in the subjective tests, to ensure they have an acceptable level of hearing. ITU-R recommendations stipulate that participants must be screened or ‘expert’ listeners (Bech & Zacharov, 2006). One hopes that good auditory health will assist in both the speed and efficiency of the testing and the accuracy of the results…! In order to get a more quantitative data set, we may also look at ‘black-box’ or ‘dummy head’ testing of the different systems, and triangulate this with the results of the subjective listening tests.
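
For illustration, here is a minimal sketch of how a single MUSHRA trial could be assembled; the format names are placeholders, and this is not our final test rig:

```python
# Assembling a single MUSHRA trial: the known reference is also hidden
# among the stimuli, alongside a band-limited 'anchor', and the playback
# order is shuffled per subject. Format names are placeholders.
import random

def mushra_trial(formats):
    stimuli = list(formats) + ["hidden_reference", "lowpass_anchor_3.5kHz"]
    random.shuffle(stimuli)
    return stimuli  # each stimulus is graded 0-100 against the open reference

formats = ["format_A", "format_B", "format_C", "format_D", "format_E"]
print(mushra_trial(formats))
```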


ITU-R BS.775-3 showing 5.1 and 7.1 speaker configurations.

Source: http://www.lsi.usp.br/interativos/nem/audience/index_eng.html

On a practical note, we have time-lined our research plan up until the date our findings will be presented, taking into account key areas such as literature reviews, questionnaire design and distribution, setting up and conducting listening tests, and statistical analysis of data. Google Calendar provides us with a shared diary to track where we are and what we need to be doing, with Dropbox a shared repository for storing and organizing relevant literature and the documents we are working on together. Useful web resources are bookmarked and shared on Diigo.com.
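
And when the scores come in, the statistical analysis needn’t be daunting either. A minimal sketch using SciPy’s paired Wilcoxon signed-rank test, with made-up scores for 20 participants comparing two formats on one attribute:

```python
# A paired test on per-participant spatial-impression scores for two of
# the formats. The score lists are made-up placeholders for 20 subjects.
from scipy.stats import wilcoxon

format_a = [72, 65, 80, 58, 90, 77, 68, 83, 61, 74,
            79, 66, 85, 70, 62, 88, 75, 69, 81, 73]
format_b = [60, 62, 71, 55, 78, 70, 64, 75, 59, 66,
            72, 61, 77, 65, 58, 80, 70, 63, 74, 68]

stat, p = wilcoxon(format_a, format_b)  # Wilcoxon signed-rank test
print(f"W = {stat:.1f}, p = {p:.4f}")   # small p suggests a real difference
```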

With this research project only in its infancy, we know roughly what we want to look into and ways to approach it, but we don’t have a hypothesis. However, despite the view of Bech and Zacharov (2006), I’m not entirely sure we need one. We can’t speculate as to whether one system is better than another, whether DTS is preferred to Dolby Digital, or whether SDDS is the most “immersive”. All we can do is design and conduct the tests as consistently as possible and let the participants’ results tell us if one system is perceived as being ‘better’ than the next (we may also have to try and quantify what ‘better’ is!).


“Process diagram for the preparation of a perceptual evaluation”

From Bech & Zacharov (2006).

I guess only the most clearly focused, instantly inquisitive, single-minded researchers know exactly what they want to ask, along with the ‘how’ and ‘why’, from the outset. The formulation of a research question is an organic process that can go around in circles while it’s kicked into shape as the subject is explored and relevant literature reviewed. In this age of tweeting, blogging and a more liberal approach to both open education and online collaboration, the shaping and honing of our lines of investigation can only benefit from a sharing of ideas and data, and from being open to the input and advice of our global network of peers out there on the world wide web. So please feel free to leave comments if you have any suggestions on how we may develop our research, and get in touch if you’re in the Manchester area and would like to take part in the listening tests.

Some useful links:

Bech & Zacharov’s full text can be found here, though as a more accessible introduction to perceptual audio evaluation this tutorial is a good starting point.

For the technically minded, here are the ITU recommendations and EBU standards relevant to this project.

EBU-TECH 3324 (2007): EBU Evaluations of Multichannel Audio Codecs.

ITU-R BS.775-3 (2012): Multichannel stereophonic sound system with and without accompanying picture.

ITU-R BS.1116 (1997): Methods for the subjective assessment of small impairments in audio systems including multichannel sound systems.

ITU-R BS.1387 (2001): Method for objective measurements of perceived audio quality.

ITU-R BS.1534 (2003): Method for the subjective assessment of intermediate quality level of coding systems.

References:

Bech, S & Zacharov, N. 2006. Perceptual Audio Evaluation: Theory, Method and Application. Wiley. Chichester.

ITU. 2003. Recommendation ITU-R BS.1534: Method for the subjective assessment of intermediate quality level of coding systems.  Retrieved 13th October 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1534-1-200301-I!!PDF-E.pdf

ITU. 2012. Recommendation ITU-R BS.775-3: Multichannel stereophonic sound system with and without accompanying picture. Retrieved 12th October 2013, from http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.775-3-201208-I!!PDF-E.pdf

Kassier, R. Lee, H-K. Brookes, T. & Rumsey, F. 2005. An informal comparison between surround-sound microphone techniques. AES Convention Paper 6429. Barcelona.

Rumsey, F. 2001. Spatial Audio. Oxford. Focal Press.

 

It’s all computational. It’s all educational.

From British Sea Power’s “Monsters of Sunderland”

I’ve spent a worthwhile few hours this week watching Martin Weller’s YouTube playlist “Understanding OER in 10 Videos” as a first delve into the concepts of “Digital Scholarship” and “Open Educational Resources (OER)”. I’d recommend watching all the videos, but the talks on OER by Dr. David Wiley and Gardner Campbell in particular were a fascinating insight into the current state of learning in the digital age, the unfathomable opportunities – both educational and social – we are provided with, and the obstacles we face in trying to break free from historical ideologies, commercialist paradigms and, indeed, our own imaginations.

The essence of OER is also a difficult concept for some, and in many ways its major obstacle, and that is sharing: using the Internet to freely disseminate information on a global scale. In his talk on OER, Wiley defines it as “teaching materials that are freely shared and allow us to engage in the four R’s”, these being re-use, redistribution, revision and re-mixing (Wiley, 2010). Wiley talks candidly about the need for “openness” in education and “generosity” in the face of opposition plagued with what he refers to as “loss aversion”. Not everybody wants to share what they know, or give it away for free. Granted, it’s a human condition, but we’ve changed before with the advent of technological developments. The parallel he draws between the rise of the Internet and the impact of the printing press in the 15th century – a time when information was in demand but its distribution was choked by “outdated thinking reinforced by Draconian law” – is uncanny to the point of being eerie. That first collision 500+ years ago was a precursor to what we now refer to as the Reformation. With information now immediate and mostly free on the Internet, and numbers in higher education estimated to at least double over the next twenty-five years, there has never been a more valid argument for reforming the way we teach, learn and share our collective knowledge.

Yet for every positive thinker who sees the benefit of an open, networked world with free information, there also exist those in opposition: egotistical academics, greedy corporations and parsimonious institutions for whom the concept of ‘giving something away’ is completely alien. Is this aversion rooted in competitiveness or commercialism? Both, probably, but as Wiley points out, expertise is and should be non-rivalrous, in the sense that it can be “given without being given away” (Wiley, 2010).

A big part of the OER debate then comes down to sustainability. If something is being ‘given away’ for free, how is such content generated and maintained if no-one pays? (Downes, 2007). Many, however, view current commercial distribution systems as inefficient. Statistics from Kansa & Ashley (2005, cited in Downes, 2007) indicate that approximately 27% of research papers are published, and only 5% openly shared; they argue that the value of data increases tenfold when openly available. Downes lauds the benefits of OER, but accepts them “only if the cost can be borne in terms of funding and practicality”. An online version of Downes’ paper, with his proposed funding models, can be read here. Caswell et al. (2008) herald OER as the chance to “deliver on the promise of the universal right to education”, claiming it can provide learning content for unlimited users at “no additional cost beyond the original cost of production”.

While on the subject of cost, another Wiley video looks at the impact of OER in the context of the “COUP” framework, “COUP” being an acronym for Cost, Outcomes, Use and Perception (of OER resources by students and faculty staff). Here he argues that open, “custom-made” textbooks are more cost-effective and have more impact than traditional textbooks. Students can create the desired content for their own book and print it for less than the cost of a library book (in which they can make neither notes nor annotations!). If they choose to use a digital version of this resource (and use bookmarking sites such as Diigo.com to highlight important parts of the text), the cost becomes zero. Less cost, more impact (Wiley, 2012). Additionally, Caswell et al. (2008) believe that good OER practices can change “distance education’s role from one of classroom alternative to one of social transformer”.


Source: http://oer.lbcc.edu/

However, OER also exists outside the framework of ‘traditional learning’. In his book “The Digital Scholar”, Martin Weller describes new ways of researching and working as the Internet reaches out across the globe. While books and journals still form a constituent part, more and more e-books and e-journals are being used for teaching and learning. Social bookmarking sites are utilized so we may see where our contemporaries are getting their references. Social networking sites and personal blogs are becoming an increasingly important forum where like-minded, open individuals can collaborate, share their ideas and data, and solve problems together (Weller, 2011). The TEDx talk by Michael Nielsen gives a great example of how open data-sharing, and blogging about it, can solve not one but a multitude of problems as open education manifests into “Open Science” or “Research 2.0”. It’s an engaging talk covering some key issues surrounding approaches to research and sharing data via collaboration on the Internet. Nielsen (2011) is a staunch believer that “publicly funded science should be open”.


Martin Weller’s “personal work/leisure/learning environment” or “PLE”. Source: http://nogoodreason.typepad.co.uk/no_good_reason/2007/12/my-personal-wor.html

The final talk in the Weller playlist, by Gardner Campbell, demands more attention and focuses on some key areas of conflict that OER faces. Similarly to Wiley, Campbell baulks at the attitude of peers who (of OER) claim, “It may be learning, but it’s not academics” (Campbell, 2012). Amongst other things, Campbell discusses the role of ‘Massive Open Online Courses’ (MOOCs), recursion, and the idea that we should be building ‘The Web’ together. This, however, in his opinion requires a level of thought that we are not completely accustomed to nurturing, and he references Gregory Bateson’s “hierarchy of learning” (Bateson, 1972) to illustrate it. A good introduction to Bateson’s work is the paper by Paul Tosey, which discusses it in the context of learning for management development and higher education.

An interesting element of Campbell’s talk was the topic of “trans-contextual syndrome” and the idea of the “double bind”. As an audio engineer I’ve come across this concept many times, when requested to “make the snare have more crack, but also make it softer” or “make the guitars a wall of sound but push them right back in the mix so we can hardly hear them….”. What we’re talking about is two conflicting demands. The example Campbell offers is that of the ‘media blog grading rubric’, and it should resonate with any of my peers reading this, as it relates to a situation where students are asked to blog as part of their module and show ‘creativity and originality’ while embracing a culture of sharing… re-using, redistributing, revising and re-mixing. Being instructed to be freethinking and open while still adhering to strict criteria in order to demonstrate understanding, and ultimately pass, represents the conflict inherent in the double bind. Watch Campbell’s video and what you will see is that while this may be a contradiction, working through our trans-contextual syndromes to a final outcome, customizing and constructing our own education and building the Internet together, might just be the answer. The double bind: a tough nut, but one we have to crack.

I should probably have started by telling you that I didn’t learn about digital scholarship in a book. I didn’t conduct a literature review of journals to gain an understanding of OER. And, while I may have sat in a lecture that broadly outlined the concepts, I did the majority of my research – my learning – using freely available Internet resources.

References:

Bateson, G. 1972. Steps to an ecology of mind – collected essays in anthropology, psychiatry, evolution and epistemology. Retrieved 6th October 2013, from http://www.edtechpost.ca/readings/Gregory%20Bateson%20-%20Ecology%20of%20Mind.pdf

Campbell, G. 2012. Ecologies of Yearning. Open Ed’ Conference Keynote. Retrieved 5th October 2013, from http://www.youtube.com/watch?v=kIzA4ItynYw&list=PLWZ0HETZsWsN2h70E3MFCUQD1kh59wTxt&index=10

Caswell, T. Henson, S. Jensen, M & Wiley, D. 2008. Open Educational Resources: Enabling universal education. International Review of Research in Open and Distance Learning, Volume 9, Number 1. Utah State University.

Downes, S. 2007. Models for Sustainable Open Educational Resources. National Research Council Canada & Institute for Information Technology. Canada.

Nielsen, M. 2011. Open Science now! A TEDx talk. Retrieved 5th October 2013, from http://www.ted.com/talks/michael_nielsen_open_science_now.html

Weller, M. 2011. The Digital Scholar: How Technology Is Transforming Scholarly Practice. Retrieved 6th October 2013, from http://www.bloomsburyacademic.com/view/DigitalScholar_9781849666275/acknowledgements-ba-9781849666275-0000023.xml

Wiley, D. 2010. Open Education and the Future. A TEDx talk. Retrieved 7th October 2013, from http://www.youtube.com/watch?v=Rb0syrgsH6M&list=PLWZ0HETZsWsN2h70E3MFCUQD1kh59wTxt&index=2

Wiley, D. 2012. The Open Education COUP. Retrieved 6th October 2013, from http://www.youtube.com/watch?v=0y5OibrBwsI&list=PLWZ0HETZsWsN2h70E3MFCUQD1kh59wTxt&index=8