A Dip in the River – An interpretation of John Cage’s A Dip in the Lake for Lafayette, Indiana


Over the past semester I did a re-imagining of Cage’s A Dip in the Lake for the Greater Lafayette, Indiana area. It was a pretty interesting process, and despite my love of recordistry, not something that I’d have usually embarked on.

Score for A Dip In the Lake

Background
I think I’ve gotten deep enough into this piece that it’s a little hard for me to describe it concisely. The original A Dip in the Lake is a kind of visual composition for a sound collage. I’ve not been able to find much detail on his composition process, but it looks like Cage simply selected random points on a map of the Chicago area, and a list of addresses was created from that map. The composition was published in 1978 by Henmar Press, Inc., and copies are available in some libraries.

Aside from the location list, little direction is provided in the original work beyond the text:

A DIP IN THE LAKE: TEN QUICKSTEPS, SIXTY-ONE WALTZES, AND FIFTY-SIX MARCHES FOR CHICAGO AND VICINITY

for performer(s) or listener(s) or record maker(s)

(Transcriptions may be made for other cities, or places, by assembling through chance operations a list of four hundred and twenty-seven addresses and then, also through chance operations, arranging these in ten groups of two, sixty-one groups of three, and fifty-six groups of four.)

Funny how the above direction describes the work better than my earlier attempt did. The lack of specificity is really nice. It opens the work up to be as simple or difficult as you want, and leaves it free for all kinds of interpretation. There are so many different ways you could go about this! One that just dawned on me is using video instead of just audio.

The lack of specificity could also be a burden, depending on how you look at it. I generally like to have specific direction when I’m working on something like this. Having a logic, an ideal outcome, or even just a reason for doing the project in the first place is generally important, and not knowing these things can be crippling. [This is probably the biggest issue in my life right now as I go through a graduate program in Art and Design, specifically the areas of Industrial Design and Visual Design. I’m learning _creative_ professions, but in reality I’m just learning to spot and regurgitate trends.] This is where I really got into Cage’s philosophy. It’s almost like decision nihilism. The artist’s choice is totally irrelevant, or rather, the beauty lies in chaos, and making decisions undermines that.

My Version
I’m getting too far into the theory. To step back: for my re-imagining and realization of this piece, I fought to use chance wherever possible, and beyond that, I used more technologically mediated methods for doing so than I suspect Cage did. I guess this really makes it easier to be “random”, which I think is a good thing. It also highlights our default use of technology for completing everyday tasks.

MapWithLines

To start, I chose my locations randomly using a website called GeoMidPoint. It was the first thing I found in a Google search, but it turned out to suit my needs: it generated 20 GPS locations within a radius I specified that mostly encompassed Lafayette and West Lafayette, IN. I only used 20 points (down from Cage’s prescribed 427) to make this completable in the given time frame.
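
I don’t know exactly how GeoMidPoint generates its points, but the general idea of scattering uniformly random points within a radius of a center looks something like this in Python (a sketch only; the center coordinates and radius below are placeholders that roughly match the Lafayette area):

```python
# Rough sketch of generating random GPS points within a radius of a center,
# roughly what I used GeoMidPoint for. Center and radius are placeholders.
import math
import random

def random_points(center_lat, center_lon, radius_km, count):
    points = []
    for _ in range(count):
        # sqrt() keeps points uniformly distributed over the disk's area
        # rather than clustering near the center.
        r = radius_km * math.sqrt(random.random())
        theta = random.uniform(0, 2 * math.pi)
        d_lat = (r * math.cos(theta)) / 111.0  # ~111 km per degree of latitude
        d_lon = (r * math.sin(theta)) / (111.0 * math.cos(math.radians(center_lat)))
        points.append((center_lat + d_lat, center_lon + d_lon))
    return points

# Example: 20 points within ~8 km of downtown Lafayette, IN (coordinates approximate)
for lat, lon in random_points(40.4167, -86.8753, 8, 20):
    print(f"{lat:.6f}, {lon:.6f}")
```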

My next step was to visit all of these locations to record audio. The quickest way I could think of to get to each GPS position was to enter it into Google Maps. Interestingly, this resolved the locations to street addresses. That form was much easier for me to use, but it also distorted the data a bit: Google Maps “thinks” in terms of streets, not locations, and this was evident in its translation of the GPS coordinates. For example, one of my GPS locations was in the middle of a corn field. Rather than giving me directions to the middle of the corn field, Google Maps gave me directions to the closest road to that point, along with a picture of that spot on the road. It was interesting to consider what we lose in the augmented perception offered by Google Maps: you can’t see inside structures, or hear the sounds of a location, and so on.
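
If I were scripting that step instead of pasting coordinates into Google Maps, a reverse-geocoding library would do the same coordinate-to-address translation. A minimal sketch using geopy’s free Nominatim geocoder (an assumption on my part, not what I actually used), which shares the same road-snapping caveat:

```python
# Minimal reverse-geocoding sketch with geopy's Nominatim service. Not what I
# actually did (that was Google Maps by hand), but the same kind of
# coordinate-to-address translation, with the same "snaps to the nearest
# addressable road" behavior.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="dip-in-the-river")  # any descriptive string
location = geolocator.reverse((40.4167, -86.8753))     # one chance-chosen point
print(location.address)
```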

I recorded 2 minutes of audio at each location and then proceeded to my next step: figuring out how to combine the audio. Peter Gena, who did the first realization of this piece in 1982, was my primary source of information for processing the audio. [1. Isn’t it interesting that the composition sat around for four years before it was ever performed? 2. In retrospect, I shouldn’t have relied on prior methods in figuring out my own.] Gena had the luck of being able to ask Cage himself how the sounds should go together, and it was suggested that he use a method similar to one from another of Cage’s works, Rozart Mix, which involved some interesting (and random) editing of magnetic tape. My recordings started off in the digital realm, so I had to adapt. I initially planned to cut up the audio segments “by hand” in editing software and recombine them according to chance operations, but before long I realized that even with my reduced number of recordings, it would take a really long time. Instead, based on a friend’s suggestion, I used Cycling ’74’s Max software to build a processor that automatically did what I had planned to do manually. It worked wonderfully, and as a side effect, it can run indefinitely, which immediately made me think it would be something cool to use in a gallery show.
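
For the curious, the gist of what the Max patcher does can be sketched outside Max as well. Below is a rough Python approximation of the chance-operation splicing, not the patcher itself: it assumes the field recordings are mono 16-bit WAV files, and the file names and segment-length range are placeholders.

```python
# Rough approximation of the patcher's job: repeatedly pick a random recording,
# pull a random slice out of it, and append it to the collage.
import glob
import random
import wave

import numpy as np

def chance_splice(paths, total_seconds=120, seg_range=(0.5, 5.0), rate=44100):
    """Assemble a collage by chance: random file, random slice, repeat."""
    out = []
    elapsed = 0.0
    while elapsed < total_seconds:
        path = random.choice(paths)                      # chance pick of a recording
        with wave.open(path, "rb") as w:
            frames = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        seg_seconds = random.uniform(*seg_range)         # chance segment length
        seg_frames = int(seg_seconds * rate)
        start = random.randint(0, max(len(frames) - seg_frames, 0))
        out.append(frames[start:start + seg_frames])
        elapsed += seg_seconds
    return np.concatenate(out)

recordings = glob.glob("recordings/*.wav")               # the 20 field recordings
collage = chance_splice(recordings)

with wave.open("collage.wav", "wb") as w:
    w.setnchannels(1)                                    # assumes mono sources
    w.setsampwidth(2)                                    # assumes 16-bit sources
    w.setframerate(44100)
    w.writeframes(collage.tobytes())
```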

Long-winded enough, I suppose. Here is a video for the first of the 4 pieces that came out of this. Photos of the 5 locations included in this work are shown: first what Google Maps showed me, followed by what I found when I arrived. There is also video of the Max patcher at work.

Related links:
More details in the paper for this project
The Max patcher I used
Other realizations of A Dip in the Lake: Chicago, Washington DC, Luxembourg Germany, Potenza Italy

Adonit Jot Touch v4 (the new one) troubleshooting

Jot Touch 4 with a blinking green LED

Just a quick note that may help early adopters. The new (v4) Adonit Jot Touch stylus doesn’t pair the way the old one (v2.1) did. If you have problems, don’t use the iPad’s Bluetooth settings screen. Just make sure Bluetooth is enabled, then go into your Jot-capable app. As of this writing, that’s SketchBook Pro, PDFpen, and Inspire Pro. Make sure you have Jot Touch enabled in the app’s settings. Pushing the “A” button (the one closest to the tip) should activate the stylus.

This didn’t work for me at first, and I discovered the reason: I already had Jot support turned on from when I used the v2.1 stylus. I had to turn it off and back on again to get the new stylus to pair. (The IT Crowd reference is not lost on me, lol.)

..on treating machines like people

I just saw a blurb on The Huffington Post with Clifford Nass talking about what he usually talks about: treating machines like people. If you’re not familiar, go check out The Media Equation and his other books. Why an article about him in the mainstream media? I’m guessing it’s because he worked on Google Glass. Side note: it kind of sucks that you have to commercialize science to make people care.

Anyway, the article is mostly nothing new. What was new from him, at least for me, was his concern about multitasking.

What concerns you most about the direction of current technologies?

Unquestionably my biggest concern is the dramatic growth of multitasking. We know the effects of multitasking are severe and chronic. I have kids and adults saying, “Sure, I multitask all the time, but when I really have to concentrate I don’t multitask.”

The research shows that’s not quite true: when your brain multitasks all the time there are clear changes in the brain that make it virtually impossible for you to focus. If we’re breeding a world in which people chronically multitask that has very, very worrisome and serious effects on people’s brains. For adults it has effects on their cognitive or thinking abilities. For younger kids we’re seeing effects on their emotional development. That does scare the heck out of me.

I have the same concern, and actually wrote a paper about it last semester. By the time the paper was done I had kind of stopped caring about the issue, because I’d blown it up into a deep mind map and kept getting stuck on the issue of efficiency as the sole guiding force in interaction design. My thought is that we need devices that use less of our attention, but I think the problem is really more human than machine. Given more unused attention, we’d probably still be trying to cram other tasks in there.

I guess I need to revisit this idea. How can we reduce the cognitive overhead of multitasking while still multitasking? A Nass-like solution seems ideal, since we have the ability to deal with multiple other humans (e.g., a mother with a minivan full of kids). Surely, though, there is a finite number of humans we can deal with at once.

I really don’t have a good answer. I’d love to hear other opinions.

Generative modeling – Grasshopper

In the past week, I’ve been doing some early explorations of generative modeling. In my case, I’m using Rhino 5 with Grasshopper. I have to admit that I’m still a bit stymied in terms of finding a _good_ use for it beyond adding some weird detail to surfaces.

If you want to see some examples, check here.

For whatever reason, I set out to build surfaces from joined spheres. It makes an interesting surface, kind of like lizard skin. Initially I was assigning a random 3D point field to a cube (the default shape for Grasshopper’s 3D Populate) and then using those points as the centers of spheres of random size. It’s pretty, but limited in usefulness.
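
Outside of Grasshopper’s node graph, that point field plus random-sphere step amounts to something like the following plain-Python sketch (not my actual Grasshopper definition; the counts and size range are placeholders):

```python
# Plain-Python sketch of "populate a cube with random points, then put a
# random-radius sphere at each one". Counts and ranges are placeholders.
import random

def populate_cube(count, size=10.0):
    """Random 3D points inside an axis-aligned cube with the given edge length."""
    return [(random.uniform(0, size),
             random.uniform(0, size),
             random.uniform(0, size)) for _ in range(count)]

spheres = [(center, random.uniform(0.3, 1.5))   # (center point, radius)
           for center in populate_cube(200)]
```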


Sphere-ish object generation with Grasshopper.

I really couldn’t wrap my head around a way to limit the point field to a non-cube object (I suspect I need to use “Inside” and then “Cull”), so I decided to do it by surface instead, which seemed like it would limit the number of spheres (good for processing time).
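
The “Inside” then “Cull” idea I suspect I need boils down to something like this, sketched in plain Python with a sphere standing in for an arbitrary bounding shape (Grasshopper’s own components work on real breps, so this is only the logic, not the definition):

```python
# Sketch of "test Inside, then Cull Pattern": keep only the random points that
# fall inside a bounding shape. A sphere stands in for an arbitrary solid here.
import math
import random

def inside_sphere(p, center=(5.0, 5.0, 5.0), radius=4.0):
    return math.dist(p, center) <= radius

points = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
          for _ in range(500)]
pattern = [inside_sphere(p) for p in points]                # the "Inside" test
kept = [p for p, keep in zip(points, pattern) if keep]      # the "Cull" step
```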

Sphere surface

Next I started putting in whole objects. I ran into a problem where the random generator kept producing the same number over and over, but with my professor’s help, it now works reasonably well.
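
For what it’s worth, one classic way a “random” component ends up producing the same value over and over is being re-seeded with the same seed on every evaluation. This is only my guess at the class of bug, illustrated in plain Python rather than Grasshopper:

```python
# Tiny illustration of a common "same random number every time" bug:
# re-seeding the generator with a constant on every call.
import random

def bad_random():
    random.seed(42)             # re-seeded every call, so the output never changes
    return random.random()

def good_random(rng=random.Random(42)):
    return rng.random()         # seeded once, then allowed to advance

print([round(bad_random(), 3) for _ in range(3)])    # the same value three times
print([round(good_random(), 3) for _ in range(3)])   # three different values
```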

Input and Output

It’s not perfect, but it’s pretty close to what I want. The next step is using this for another project – I’m doing a bronze casting, and thought this kind of form would be perfect for it. These kinds of shapes aren’t great for a lot of manufacturing techniques, but casting should work well; the only problem will be cleaning up my initial model. It will be a 3D print of this form, and given the nature of the prints from the 3D printer at school, it will be very grainy. I’ve really not come up with a good way to sand a sphere yet…

Anyhow, I think the object I make is going to be a drum lug – the part of a drum that allows the head to be tightened down. See the illustration.

Drum Lug

I figure I can have 8 or 10 of them cast and build a drum with them as a good way to show off the casting design. We’ll see how it turns out. If you’d like to play with this yourself, here’s the setup I’m currently using.

Grasshopper script

Mike Doughty is bringing me down. (or, “The future reality of playing music for a living”)

I used to have an interest in the music industry. I think I actually believed that I could potentially, eventually get a job in that mucky muck. With the passing of such ideas, I still have a vague interest in the music industry. Mostly I’m appalled at the way businesses have failed to adapt to the advent of the Internet (suing your customer base for fun and profit) and simultaneously I’m curious to see how things will shake out once the sinking ships finally go down.

There has recently been an exchange in the media whose honesty is one of the more accurate reflections of the current situation of making money off of music. It all started with a blog post by Emily White, an NPR intern, titled “I never owned any CDs to begin with” in which the author brags about the amount of music she has acquired illegally, rationalizing her acquisitions by way of the venerable “artists don’t get any of the money from record sales” argument.

Emily’s post moved a lot of people to write about it, drawing almost 1,000 comments, a commentary from an NPR staff member, a post from a talent agency co-founder, and, most importantly for my little narrative, a reaction from David Lowery, the singer for Camper Van Beethoven and Cracker, who now teaches music business courses at the University of Georgia.

Lowery depicts the current music industry situation by tracking the flow of end-user money: when users download illegally, their money goes to internet providers and manufacturers of computers and phones _instead of_ musicians. While I can see his point, it’s not watertight, since downloading legally still requires that you buy a laptop and internet service. Regardless, his ultimate point stands – musicians don’t get any money. This is backed up by the fact that most musicians do not, as many laypeople suspect, make all their money from touring. He goes further, insinuating that music stealing drives depressed musicians to suicide; kind of a low blow.

The next step in this conversation of comments comes from Mike Doughty, who in a blog post agrees with Lowery’s description of the situation and takes it a step further in the form of the equation:

less money to record labels = less tour support for bands = fewer bands

Doughty drives this home by positing that a band like Radiohead wouldn’t have survived if they’d had to deal with this new industry economy. It’s a depressing picture, not just because of Radiohead, but because there will be fewer creative bands overall.

I agree with many of his points, but I can’t help but think that his equation is only good at predicting the short term. We’ve already been seeing the fallout of diminished recording industry revenues: the big labels not gambling as much on quirky acts and instead banking on the sure bets. This manifests as an abundance of over-produced, good-looking pop singers and little else. I feel like this is what Doughty is describing. We’re already there. My feeling, though, is that this strategy won’t sustain the music industry, or if it does, its neglect of all the other non-cookie-cutter music will spawn new avenues for bands that don’t fit the mold. Sure, these bands won’t be able to tour the way they have in the past, but does that mean they can’t be successful? I feel like new avenues for music discovery will develop as people who like music other than what’s on the radio grow discontent with the Katy Perrys and Maroon 5s of the world.

What will these new distribution avenues and tastemakers be? I have no idea. That’s for someone else to think up. I think there is plenty of room for it, though. Technology has not only given people the power to steal music, but it has also given people the power to create web streams and pirate broadcast stations at little financial cost. The web has given us a huge network of self-guided, interactive discovery. You can’t shortchange that. At an even more basic level, will the death of the “getting signed” dream keep people from making music? Yeah, right. Fewer people will be able to make a living playing music, but is that necessarily a bad thing? Music is a big part of the human experience, and I wouldn’t mind seeing it de-commercialized a bit. It doesn’t cost as much now to “be a musician” as it used to. You can buy instruments at Walmart. You can record your music on your laptop at home and distribute it on the internet. Yeah, you won’t get Radiohead-level famous doing this, but why do you need to be? If the “get rich” factor is removed from the equation, I can’t help but think that cooler, more interesting music would surface.

 

Note: I’m not deriding any of the authors mentioned above, or trying to say that they are wrong. We’re all just trying to see the road map of the future of music.

 

can “IP on everything” yet?

Back around 1998, Vint Cerf, a guy touted as “the father of the internet,” came to speak at Purdue. (Yeah, I know, I thought Al Gore was the father of the internet. Derp.) It was an odd time. The internet as we have kind of come to know it (the web) was just getting started, having only been available for public use since 1993. I’m guessing I was still on some form of dial-up at the time.

Vint Cerf on Boardwatch Magazine

I only remember two things from Cerf’s presentation. The first was the Beavis and Butt-Head-worthy slogan “IP on everything”. Huh huh. (IP stands for Internet Protocol, a layer of the infrastructure by which the ’net works.) I don’t really recall, but it seems like he must have said it several dozen times. The gist of the phrase was that in the not-so-distant future, every electrical device would somehow be connected to the internet. At the time, I think this idea was a little more of a stretch than it is now. Why would anyone want to connect their washing machine to the internet? From a pragmatic perspective, I guess I still kind of ask the same question, but at least we now have examples of possible uses. Sure, I’d like a text message when the dryer is done.

As an aside, it’s kind of interesting to think that the other end of this IP-on-everything interaction had few interesting prospects back in the ’90s. I think it was kind of assumed that you’d sit down at your gigantic CRT-monitored PC (because Macs sucked at the time) and wait for the message from your dryer. Similarly, it’s also interesting to think about how many roles cell phones now play (and will eventually play) in our data interactions.

For what it’s worth, the other thing I remember him mentioning was having internet on the space shuttle. I think we’ve already achieved that through some terribly slow radio relay.

electric imp guts

imp development board

Anyway, I should probably get to the point. I recently read about a product called electric imp that provides an infrastructure for ol’ Vint Cerf’s idea. It appears to be a small module packaged in an SD card form factor, plus a socket that’s small enough to fit in most devices, even something like a light socket. It uses WiFi, so you don’t have to pull a bunch of wire through your house to get it all connected.

Twine

This isn’t the first project of its kind. Twine came along earlier on Kickstarter, providing a WiFi-enabled sensor platform. Twine’s blocky, external form strikes me as more of a hobbyist’s toy than something seriously embeddable, not to mention that it costs $100 a pop at the early adopter level. The imp, on the other hand, appears ready to ship to OEMs for inclusion in products at around $50 apiece.

What can you do with these things? My first thought was along the lines of power usage logging. A little program could easily be written to harvest the data from all the imps and cross-reference it with the current electricity cost. This would show you, in dollars, what each device in the house was costing you. Depending on how robust the imp is, you could possibly even tell when devices were plugged into certain outlets based on current draw.
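
To make the cross-referencing concrete, here is a back-of-the-envelope sketch of the arithmetic; the device readings, duty cycles, and electricity rate are all made up, and a real version would pull them from the imps and the utility instead:

```python
# Back-of-the-envelope sketch: turn per-device power readings into dollar costs.
# All readings, duty cycles, and the rate are placeholders.
hours_in_month = 24 * 30
rate_per_kwh = 0.12                        # $/kWh, placeholder utility rate

average_watts = {                          # hypothetical draw while running
    "refrigerator": 150,
    "dryer": 3000,
    "tv": 120,
}
duty_cycle = {                             # fraction of the month actually running
    "refrigerator": 0.35,
    "dryer": 20 / hours_in_month,
    "tv": 0.15,
}

for device, watts in average_watts.items():
    kwh = watts / 1000 * hours_in_month * duty_cycle[device]
    print(f"{device}: {kwh:.1f} kWh, about ${kwh * rate_per_kwh:.2f} this month")
```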

Really though, power logging alone is kind of short-sighted. I think the real fun of these is going to be bringing a high level of home automation to the average Joe. Having this infrastructure means manufacturers can more cheaply make automatable things, like powered curtains that you can set to open and close at certain times, light fixtures that turn on and off, etc. I’m not a systems integrator, so I don’t have a lot of cool examples, but I think you get the idea.

[EDIT]

Lauren just brought up the very good point that the electric imp could serve as the hardware basis for an idea that Scott Jenson of Frog Design spoke about at the IDSA Midwestern Conference a few months ago. [View the video here if you are an IDSA member; I think it only works in IE. Or, if you’re like me and don’t think the IDSA membership is worth it, see basically the same talk on Vimeo.] From the IDSA webpage for the talk:

Mobile apps are on a clear trajectory for failure. It’s not possible to have an app for every device in your house, every product you own and every store you enter. Much like how Yahoo!’s original hierarchy gave way to Google’s search; applications have to give way to a just-in-time approach to applications. This talk will explain how applications must give way to a more universal approach to application distribution: one based on the mobile Web and cloud services. The problem, of course, is that the mobile Web has both hands tied behind its back. Any mobile app today is locked away behind a browser ghetto: in effect, a sub OS inside a larger mobile OS. This isn’t just an arbitrary technology debate. A just-in-time approach to application functionality can unleash entirely new sets of application, ones that are impossible with native apps. This talk will lay out how this problem can be fixed, and what changes need to take place, outside of just HTML5, for it to happen.

Scott Jenson – Why Mobile Apps Must Die – BD Conf, Sept 2011 from Breaking Development on Vimeo.

I think what Scott is pushing for is an ad hoc network of “stuff” in your personal area. In much the same way that new WiFi networks appear on your iPhone when you are in range, WiFi-enabled everyday objects would make themselves known to you as well. For example, when I’m in my office building, I might get a phone notification that there is a vending machine down the hall running a sale on Diet Coke. Not only does this bring about a kind of just-in-time product/service delivery model, it can also make sales more efficient: ideally, the Coke machine would want to sell all of its stock before it’s refilled, so if it has a surplus of something as the refill date approaches, it could discount those items to move the product.
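
A toy version of that surplus-discounting rule, just to show the logic; the vending machine, its numbers, and the pricing formula are entirely hypothetical:

```python
# Toy surplus-pricing rule: the bigger the surplus relative to stock, the deeper
# the discount, capped at max_discount. Everything here is hypothetical.
def discounted_price(base_price, stock, expected_sales_before_refill, max_discount=0.4):
    if stock == 0:
        return base_price
    surplus = max(stock - expected_sales_before_refill, 0)
    discount = min(max_discount, max_discount * surplus / stock)
    return round(base_price * (1 - discount), 2)

# e.g. 30 Diet Cokes on hand, but only 10 likely to sell before the refill truck
print(discounted_price(1.50, stock=30, expected_sales_before_refill=10))  # ~1.10
```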

Other examples might include getting a notification about bus schedules when you’re near a bus stop. The underlying idea is that you don’t have to search for the information you need; it comes to you.

It’ll be interesting to see where things go from here.

Thoughts about the physicality of communication devices

I saw an article a few weeks ago displaying new concept models of iPods/iPhones. Most were wearable items like a ring or bracelet. After some time, I realized that these concepts were sitting uncomfortably with me. I guess I just have a difficult time believing that the next generation of communication technology interfaces will be something you wear. I’m prone to thinking that we are already at a pretty efficient interface ideal with the iPhone/Android/etc., at least until such devices are more bio-integrated and worn on the inside of our bodies. That is a subject for another time. For now, let’s focus on the current crop of smartphones.

The brief physical anthropology of communications devices:
I think it’s definitely possible to see an evolution in electronic communication devices. Skipping the obvious face-to-face methods that have existed for thousands of years, I think the telegraph is a reasonable starting point for a communications technology as the term has come to be generally understood. The interface was stationary and passed information serially using only Boolean data. Next came radio transmission, which in retrospect seems like more of an underlying support technology, allowing the telegraph to become mobile. Then the telephone, which began as a stationary unit for parallel audio transmission. Then we slid radio technology under audio transmission and had what we now know as AM/FM radio, which at the time meant stationary transmitters and nominally movable receivers, though the receivers were too large and heavy to be moved regularly. Next, radio technology came to the telephone and we had wireless handsets, which were very mobile but had limited range. Soon the first cellular phones appeared, with gigantic battery packs, relegated largely to emergency use in a car or to military communication. They slowly shrank in size, picked up more casual use, and overlapped with the user base of landline phones. Computers also came on the scene, initially adopting a typewriter-like interface for input and output; this hasn’t changed much from the keyboard/display setup we are still using with computers today. And lest we forget the fax machine, which I feel was already obsolete shortly after it hit the scene, yet for whatever reason still has quite a user base.

So at this point, this is probably looking like the so-and-so-begat-so-and-so bit from the Book of Genesis. We’re about caught up to the present, though. There already seem to be a few instances of convergence whenever a new technology or social use comes along. So here we are with rapidly shrinking cell phones and highly mobile laptop computers with wireless connectivity. These user bases overlap, and we start seeing the functionality of computers in phones (instant messaging, email, web browsing) and phone functionality in computers (VoIP such as Skype, et al.). It kind of makes sense to combine the two, and here we are with iPhones, BlackBerrys, and Androids.

The actual interface:
OK, now that we understand a bit of the physical anthropology of the communications device, let’s take a closer look at the interface. The profile we’re looking at is an object that can be operated with one hand and stored in a pocket, like a cellular phone, with an approximation of a desktop/laptop computer’s capabilities for high-resolution display, data storage, input, and processing power. Along the way we also convergently picked up the functionality of digital cameras and music players. It’s interesting to me that we seem to have taken functionality from the computer and shoehorned it into the small package of the cellular phone; I believe this illustrates the strongest aspects of each device. It’s also interesting that the camera and music capabilities are easy to tack on, since the requirements for computer functionality provide an easy infrastructure for adding them.

Since the general form is more like a phone than a computer, it’s easy to see that the interface of the phone functions will be similar to that of a standard, non-smart cell phone. The physical form of the computer, on the other hand, was large, and that size was mostly occupied by the I/O elements. A smaller screen and the likely lack of a physical keyboard show the need for a modified interface. It’s very important to note that there is a trade-off here. What we have wound up with, interface-wise, is a stripped-down version of what MS Windows and Mac OS have been all along: a list of clickable icons. In the absence of a mouse, we are now using touch screens. A keyboard is emulated, but almost all incarnations of this idea pale in comparison to the efficiency of a standard computer keyboard.

What am I getting at:
After all that, I hope you can see my point. We have arrived at the modern smartphone handset through a kind of natural selection, adopting traits of communication devices we find beneficial and leaving others behind in favor of more advantageous ones. The beginnings of a move away from a physical keyboard illustrate this idea: the small, portable size of smartphones might be more important than the typing efficiency of the old keyboard. As a result, we also see the social ramifications of this, with truncated language (O I C. U R welcome. LOL.) starting on mobile devices and spreading to more traditional forms of communication. Could we assume that the use of mobile devices for communication is more important than maintaining traditional language norms?

The future:
I personally cannot see any immediate jumps away from the current smartphone handset. It seems like a very flexible platform whose functionality has not been fully tapped yet. I believe that more functional voice control, such as that in the iPhone Google app and the Android maps feature, will be the next step with this platform, allowing information request and retrieval to take place over a headset with limited physical interaction with the handset. This would be highly dependent on voice recognition technology, which on the iPhone, and even on desktop computers, seems to be a ways off. If and when voice recognition catches up, concepts like the iPhone ring or bracelet will make a lot more sense, but until then they seem like they would just be a hassle to interact with.

For more realistic future implementations I hope to see gesture control and perhaps more accelerometer control. But who knows what we will see.


"What the hell is Brizzly?" or "Twitter as a wrapper for data transfer"

Some of you may have seen Brizzly.com, a site with very vague information regarding what it is – “Brizzly is a simple way to experience the social web. You can request an invitation code below and we’ll let you know when we have them ready. (Soon!)” I recently received an invitation to the site from @BrandonButram and figured I’d check it out despite my limited interest.

What Brizzly seems to amount to is a slightly thicker client interface for Twitter. Functionally, this means that shortened URLs like those from TinyURL and bit.ly appear as the full URLs they refer to, and links to images or YouTube videos show up as the actual content rather than just a link. I find this mildly handy, but currently of limited actual use.

BUT, what’s interesting to me about this is that it kind of shows Twitter posts as encoded or compressed messages, i.e., a small message that represents something larger. This isn’t a ground-breaking concept, since URLs are basically the same thing, but I suppose we take it for granted. So this takes us back to the initial benefit of Twitter: messages are short for maximum interoperability with low-bandwidth devices. Rather than using this as just 140 literal characters, we’re now including representative links to external information. This is cool in its own right, but to get back to the original goal of high interoperability on low-bandwidth devices, wouldn’t it be cool to be able to compress larger data into these 140-character packages, so there would be only one small data transfer instead of one small transfer and then one big transfer from the referenced site? A quick Google search yielded just such an idea in practice: a couple of fellows attempting to encode an image into 140 characters.
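
Just to put numbers on the idea, here is a quick sketch of how much data survives being compressed and text-encoded into a single 140-character message; the payload is arbitrary filler:

```python
# Quick check of how much data fits once you compress and text-encode it into a
# 140-character tweet. The payload here is arbitrary filler.
import base64
import zlib

payload = b"some arbitrary binary data to squeeze into a tweet " * 4
packed = base64.b85encode(zlib.compress(payload, 9)).decode("ascii")

print(len(payload), "bytes in,", len(packed), "characters out")
print("fits in one tweet" if len(packed) <= 140 else "too big for one tweet")
```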

Another example of binary data transfer via Twitter is encoding a file and breaking it up over several tweets. Personally, I think this method misses the point, but it’s interesting nonetheless, reminiscent of long-ago floppy disk installs and BBS-based transfers.
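
The chunked approach is even simpler to sketch: base64 the file, slice it into numbered tweet-sized pieces, and reassemble on the other end (an illustration only, not the actual scheme those folks used):

```python
# Sketch of splitting a file across tweets: base64 the bytes, slice into
# numbered 140-character pieces, then reassemble on the other end.
import base64

def to_tweets(data, size=140, tag_len=8):
    text = base64.b64encode(data).decode("ascii")
    body = size - tag_len                        # leave room for a "001/003 " tag
    chunks = [text[i:i + body] for i in range(0, len(text), body)]
    return [f"{n + 1:03}/{len(chunks):03} {c}" for n, c in enumerate(chunks)]

def from_tweets(tweets):
    ordered = sorted(tweets, key=lambda t: int(t.split("/")[0]))
    return base64.b64decode("".join(t.split(" ", 1)[1] for t in ordered))

original = b"any small file's bytes would go here"
assert from_tweets(to_tweets(original)) == original
```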

At the end of the day, I am unsure whether any of this really matters, since bandwidth is relatively abundant in most homes, portable computing/communication devices, and even the International Space Station. I suppose developing countries could benefit from some kind of generic 140-character packet-size protocol for wireless data transmission, but even those countries are advancing rapidly. It will be interesting to see if the future of Twitter lies in continued “What are you doing?”-style social media, or if it will come to be something more than that.

Choices in computer-based music players.

So I have this dilemma. I only play MP3-based music these days, yet I am crippled by habit. I still cling to Winamp as my MP3 player of choice, despite the fact that it really hasn’t had any new features useful to me since the ’90s. I’ve finally weaned myself off of Winamp for digital video viewing, largely because it didn’t have all the codecs I need built in, and VLC did.

I could just keep on using Winamp for music, but here’s the bigger problem: since I’ve had an iPhone, I’m using it more and more for MP3 playback. This is fine and good, but with a few inconvenient exceptions, I HAVE to use iTunes to manage my music library. I really don’t like iTunes. It’s slow, I find its interface cumbersome and unintuitive, and, simply put, it’s restrictive. I’ve grown accustomed to “manually” managing my music library, storing artists by folder and adding them the same way. iTunes makes this kind of management a real pain, especially since I have a lot of music with missing or incorrect tags.

Lately I’ve been making an effort to use iTunes for listening to music on my computer more, just so I can get the feel of it. I don’t really think I like it, but at the same time, I haven’t fully gotten into it. For example, the idea of “playlists” is still rather foreign to me. It probably doesn’t help that I generally listen to albums more often than individual songs.

It strikes me as strange that the experience of listening to the same music via different applications or platforms can be so different. I wonder if design cues from outmoded devices help us transition to new applications. Maybe I like Winamp because it has a cassette/CD-style transport control, while iTunes just has a play button and a non-linear, web-based navigation system.

Portable storage issues..

When flash memory “thumb” drives first hit the market, I really didn’t have a use for them; network-accessible mass storage and the limited capacity of the portable devices made them less than relevant. Once they reached 2GB in size, quadrupling my network storage, they became a lot more practical. These days I carry a 2GB flash drive to shuttle data back and forth between home and work, and I have an 8GB drive at home that I use for quick maintenance file transfers on my home machines. As the capacity increases, I’m finding that I rely on these devices more and more, which is particularly scary to me since they are so easy to lose track of. Their small size, plus the tendency to forget that they’re hanging out of a lab computer, could be a recipe for disaster. I’ve already started a regimen of backing up my main flash drive weekly, but this still doesn’t account for someone finding a lost drive and absconding with personal data. I’ve lately taken to making the volume label on my flash drives my phone number, so if one is lost and later found, it will be relatively easy for the finder to contact me. I also toyed with the idea of attaching the drive to a retractable keychain, but that seems a bit much.
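
The weekly backup itself is nothing fancy; something along these lines does it (a sketch, with placeholder paths for wherever the flash drive mounts and wherever the backups live):

```python
# Minimal flash-drive backup sketch: copy the whole drive into a dated folder.
# The source and destination paths are placeholders.
import shutil
from datetime import date
from pathlib import Path

src = Path("E:/")                                    # wherever the flash drive mounts
dst = Path.home() / "flash_backups" / str(date.today())

shutil.copytree(src, dst)                            # errors if today's folder exists
print(f"Backed up {src} to {dst}")
```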

Anyone have other ideas for keeping track of flash drives?