This is one of the most moving talks I’ve ever seen at a technology conference. Robin talks about the history of assistive technology for the blind during his lifetime, and the dramatic change that the iPhone wrought. The blind have an old joke that asks “How many blind people does it take to cross the street?”, and the answer was “Two: one to push the shopping cart full of devices for car-watching, curb-finding, direction-mapping, etc. And another to ask a sighted person for help.” An affordable pocket computer with motion sensors, an accelerometer, a camera, and a thriving app ecosystem has changed all that. Robin went on to detail what specialty apps he uses, which mainstream apps are (and aren’t) optimized for accessibility, and showed us the nitty-gritty of how technology changes his life and empowers him every day.
With technology came opportunity
In the 1980s, the PC revolutionized opportunities for disabled people in the home and workplace. This is how Robin got through his education.
Technology and adaptability allowed for great contributions from people like Hawking.
All you need is a single moving body part or the ability to make a sound or puff into a tube, and you can control a computer.
An iPhone changed the game: inclusion, power, and price. Now this power is in people’s pockets.
[*Robin plugs audio-out into computer*]
NB: One nice thing about being blind is that if you ask for assistance at the train station, they always let you sit in 1st class.
Another is that I don’t know what people look like; their physical appearance has no impact on me. I don’t know whether they’re old, young, whatever.
[Video of Joshua Neely, blind].
single iPhone replaces multiple devices
Two ways to navigate the iPhone:
touch an item on the screen (and a voice announces it)
swipe left or right for previous or next
Once it announces an item, it’s selected. Double-tap anywhere on the screen to open it.
Compass will announce direction. This is really useful for blind people who are disoriented with regard to direction, especially in wide-open spaces like parking lots or fields.
Specialty products at specialty prices.
Talking GPS was £700, talking notetaker was £2500.
Adjust speech rate with two fingers moving clockwise
Turn screen off and double battery life
Light detector app: there are no talking ovens, so this is how you tell if the oven’s on.
Money Reader App: he’s waving a bill under the iPad camera and it announces “20 pounds”. It takes a few tries for him to get it; usually, he does this on the iPhone (better camera, flash).
LookTel Recognizer: similar treatment; wave objects under the app and it’ll announce them.
Apps vary wildly in their accessibility.
Skype: Robin can access the dialpad, contacts, calling. All of the actions speak and can be navigated by touch (without sight).
Adobe Connect (Skype competitor): none of the objects are exposed to accessibility. He has no idea what’s going on.
Pages: a working word processor, fully accessibility-enabled. It reads text, tells him what’s selected, meta information, screen content. Voice will let him navigate, read content. New iPad will allow dictation. He’ll use a bluetooth keyboard; various multi-touch gestures are available via keyboard shortcut. It’ll read his deletions and meta information around what he types. Very accessible. iPads are the tools of choice for schools in the US.
Facebook: lots of accessibility shortcomings. There’re other apps that do similar things to FB, but some apps capture the audience; he can’t find his FB friends anywhere other than FB.
Lanyrd (app for conference planning): totally non-accessible. Can’t even sign in.
iBooks: Everything is exposed and accessible.
Kindle: Lots of unlabeled images. Listen to the text. Very little meta information. There’re other Kindle readers for the blind (and the Kindle hardware is fine) but the iOS app doesn’t work.
Accessibility is great, but it’s much slower than what sighted people can do. That sort of inefficiency is multiplied when you have to jump from email to calendar and back.
James has a fascinating and insightful take on how Art Direction—a concept from the print world—works in new ways on the web. He shares his thoughts about how to manage a brand across large groups of independent teams, as well as several really interesting implementation ideas and hacks. Head past the jump for the full notes.
Art Director at Tribal Group, an education services & technology co with ~2500 people.
Used to be UI Designer, was promoted to Art Director after a shake-up.
Now his brief is to tend to the brand across an agency split into hundreds of small agile projects.
His predecessors were from the worlds of Print or Marketing.
Véronique Vienne: “The role of art direction is literally to direct attention.”
This is curation: it’s a space to view content. Make the space as neutral and non-distracting as possible. Let the focus be on the content.
That’s not to say that we don’t direct the user through the space. We do that with signposts and wayfinding.
Pinterest typifies this notion of curation on the web.
It’s very literal (photos) with minimal navigation
Other curation sites take a similar approach.
This sort of personal Art Direction is possible on a small scale: portfolios or small projects. How to do it at scale?
At Tribal, many agile teams are focused on their individual projects at all times; not focused on the brand.
Traditionally, Brand Guidelines have tried to serve this role.
Tools: colors, fonts, backgrounds, use of imagery preferences (e.g. “no posed photos” rule).
Also collateral: business cards, stationery, annual reports.
Static Brand Guidelines
Created a web color palette. When he got there, they only had RGB and CMYK. He added hex.
The ugly side to Brand Guidelines: brand police.
Brand Guidelines are static documents.
Often the Brand Police are charged with enforcing the guidelines even if they don’t understand the motivations for the guidelines.
Companies born on the web tend to have more fluid brands, e.g. Google’s whimsical logos, or the MIT Media Lab (“in NY”!), which has a generative logo with 40k variations.
He started looking at more flexible approaches: creating different logos as brushes in Illustrator, which could then be used in different ways.
Making Brands Dynamic
Created a Brand Toolkit rather than Brand Guidelines.
A consistent look can be achieved across sites by focusing on the plumbing: common elements most sites require. Therefore, the Brand Toolkit includes a collection of icons (for common activities like wayfinding, sharing, etc.).
This allows for a consistent experience across sites without heavy-handed top-down brand-policing.
Started w/ 25 icons, and it’s grown over time. Icons for globe, window, chart, document, trash, YouTube, paper clip, LinkedIn, test tube, ruler, etc. etc.
To manage this duplication of work, they started using image sprites for logos and brand elements like the icon set. This is really useful for the product suite because they can share the sprite across projects.
If they re-brand or tweak the icons or even change the logo, they can change them everywhere as long as they keep the sprite coordinates constant.
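The sprite technique above can be sketched in a few lines of CSS. This is a minimal illustration, not Tribal’s actual assets: the file name, icon names, sizes, and coordinates are all hypothetical.

```css
/* One shared sprite image holds every brand icon. Each icon is
   shown by fixing the element's size and shifting the shared
   background to that icon's coordinates. */
.icon {
  display: inline-block;
  width: 32px;
  height: 32px;
  background-image: url("brand-sprite.png"); /* hypothetical file */
  background-repeat: no-repeat;
}

/* As long as these coordinates stay constant, the icons (or even
   the logo) can be redrawn inside the sprite file and every site
   sharing it picks up the change automatically. */
.icon-globe   { background-position:    0     0; }
.icon-chart   { background-position: -32px    0; }
.icon-trash   { background-position: -64px    0; }
.icon-youtube { background-position:    0  -32px; }
```

The key point from the talk: the sprite file is the shared asset, and the coordinates are the contract between projects.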
Started with Photoshop Etiquette.
How to share source files in a way that lets others use them easily?
These are editable, living documents that will be repurposed.
UI Pattern Library
Dev teams work in silos and then need to share more, so they started using Yammer to facilitate communication among the teams. 2500 people in Tribal(!)
Pattern library helped share.
Sharing HTML and CSS snippets across the brand.
Don’t have to re-invent the wheel.
Tribal Design Principles
1. Define a vision, through clear guidance and a brief.
They created a structured brief, which evolves over time, but allows them to have a common starting point.
2. Be flexible, and embrace the change that comes.
Don’t set rigid structures; as technology and media change, we can adapt.
3. Aim for consistency in the quality of experience.
Rather than uniformity across all products and platforms, aim for consistency for the user.
4. Share the assets, patterns, and ideas
They use SharePoint (which is kind of clumsy) and are looking for something better.
5. Democratize the design process.
Involve all the people and get their input. This isn’t design-by-committee, this is inclusion of the team: client, product owners, everyone.
Early assumptions about designing for mobile may not have been right
JC wrote “Tapworthy: Designing Great iPhone Apps”. It’s more a general touch-UI book than an exclusively iPhone one.
Q: “What’s your favorite app?” A: uhhhhh…they all drive me crazy? Maybe MindSnacks. Discuss w/ yr neighbor.
Software used to be very grey and dry, but it’s tough to get you to stop talking about your favorite app. It’s an engaging question. First the web, and now especially mobile, have made software much more human and lovable and fun.
We’re all anthropologists here.
Though we design monolithically, there’re lots of mobile cultures.
Myth 1: Mobile users are rushed and distracted
Mobile isn’t actually mobile anymore. It’s on the couch, in the kitchen, on the 3 hour layover. We have time. Mobile != Desktop Lite
Example: alibris (amazon competitor) left its main differentiating feature (rare book sales) off the mobile app.
28% of US mobile web users mostly use the web on mobile
25 million people only see the web via a phone
Making all content and features available is a “civic responsibility”. Mmm…
Myth 2: Mobile == Less
Jakob Nielsen recently advised a separate, feature-reduced site for mobile. JC does not agree.
Don’t confuse context with intent.
Myth 3: Complexity is a dirty word
Complexity is good. It’s powerful.
Mitigate confusion, not complexity.
Figure out what the user needs.
Example: the “Do I need an umbrella?” app is perfect for JC. For his Weather-Channel-watching father-in-law, it’s condescending.
It’s not that ‘less is more'; it’s that ‘just enough is more’.
Original Facebook iOS app was desktop-lite; users were incensed. “No comments?! No photos?!” It’s since grown to include ~90% of the desktop feature set, and it’s the most-used FB interface.
Don’t manage complexity all at once; manage it through give-and-take.
Don’t confuse clarity and density
“Every screen should have a primary task”.
Myth 4: Extra taps and clicks are evil
We’re not on dial-up modems anymore. A lot of “minimize clicks” ethos had to do with mitigating the effects of latency.
“Every tap should deliver satisfaction”: more information, a smile, whatever.
“Progressive disclosure”: give a little bit at a time, as it’s asked for.
An interesting opportunity for phones is that there’s probably only one user; you can train a specific user. E.g., Twitter was hiding some controls. To educate the user, they opened the screen w/ hidden controls visible, and then hid them after 250ms. Later, they turned off the animation once the user triggered the “Show more controls” button.
“Mobile == More”
How can mobile do more? It’s got cameras, microphones, gyroscopes.
Myth 5: Gotta have a mobile website
You need a great mobile experience, but a mobile website? Maybe not.
There is no ‘mobile web’. Don’t silo by device.
if “www.mysite.com” redirects to “mobile.mysite.com”, you’re doing it wrong. URL == Uniform Resource Locator.
We only know device context, not user context.
Ideally, there should be One Web.
The mobile site shouldn’t have less stuff than desktop because there’s a smaller screen. It should have less stuff because most of the stuff on desktop is crap.
Luke W.’s whole “mobile first” point is that small-screen design constraints are a useful filter for everything.
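In CSS terms, that mobile-first filter usually means serving the small-screen styles by default and layering enhancements on with min-width media queries. A minimal sketch; the 600px breakpoint and class names are illustrative, not from the talk:

```css
/* Mobile-first: the base rules serve the small screen… */
.article {
  padding: 1em; /* single column by default */
}

/* …and wider screens earn enhancements via media queries. */
@media (min-width: 600px) {
  .article {
    max-width: 40em;  /* a measured reading column */
    margin: 0 auto;
  }
  .article .sidebar {
    display: block;   /* the extras come back only when there's room */
  }
}
```

The design benefit is exactly the one Luke W. describes: everything hidden on small screens has to justify its return.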
Content runs the show. Your interface is a collection of apps plugging into the wellspring of content.
Myth 6: Mobile is about apps (or websites)
We tend to focus on a single container (e.g. apps or websites)
An app isn’t a strategy; it’s an app.
Your product isn’t an app, or a website, or a feed, etc. These are all containers. Your product is your content.
We have to pull back from our obsession with presentation and think about content.
Civilians (not just geeks) are beginning to expect their content to be available on every device.
Some people have all the devices (laptop, tablet, phone), others only have one.
We’re also beginning to expect that this content is integrated: when I put down my Kindle and start reading on my phone, or I pause a Netflix movie on the TV and pick it up in bed on my iPad, it picks up where I left off. With iCloud, this is happening with the content we create.
We’re all iCloud developers now.
Every native app should be a web client. The browser’s a native app, btw.
Ergo, we have to drive our design skills further down the stack.
Traditional editorial judgement often disappears when we go mobile, e.g. newspaper layout (reflects editorial judgement about what’s important) vs. many digital interfaces (often simply reverse-chronological order).
Let the robots do it: e.g., the Guardian parses InDesign .indd files to gauge importance and reflect that on the web.
Repurpose design content.
Create content and design strategies that aren’t tied to a particular container.
Mobile != rushed
Mobile != less
Complex != complicated
Tap quality > tap quantity
No such thing as ‘mobile web’
Focus for all platforms
Don’t think ‘app’; think ‘service’.
Metadata is the new art direction
Everywhereness is a design nightmare
It suggests infinite possibilities and therefore infinite priorities.
“Simplify before we suppress.” – Ethan Marcotte
Contexts we tend to design mobile against:
Context #1: I’m microtasking:
Capture lost time: waiting on line, when your dining companion leaves for the bathroom, etc.
identify lost tasks
Find the primary task for an app and make it available everywhere (e.g. todo apps should have an ‘add task’ icon on every screen).
Context #2: I’m local
Mobile is the primary device not because it’s always with you (though that’s important), but because it knows the most about you: sensors, personal data
change context, not content:
Shopper: shuffles your shopping list depending on where you’re geolocated in the store, e.g. produce aisle, dairy, etc.
Word Lens: point camera at a word in a foreign language, see translation.
IntoNow: foursquare for TV that listens (like Shazam) to identify season and episode.
Save input effort on these devices because they’re not that good at traditional input.
Table Drum: map table taps to drum kit sounds. WANT! Transforms your environment into an interface.
How can we use the superpowers of the device for input?
Context #3: I’m bored
It’s not so much “i’m bored” as “i have attention to spend”.
Software isn’t just for work anymore: fart apps might be frivolous, but they represent a shift from software-as-business-tool to software-as-distraction and entertainment for civilians (not just geeks).
Exploration is a common theme for escape apps (reading, games, etc).
Work apps have a lot of untapped potential for exploration; quantified life apps are video games for narcissists.
Don’t just optimize for the fastest interaction; give them a chance to slow down and explore.
How do we use these?
usage of device (mobile, tablet, desktop) vs hour-of-day.
iPad use explodes after 6pm or so; it’s the return of the evening newspaper.
Phone is mobile, tablet is portable.
iPhone: very active browsers, more willing to buy, skew older, wealthier, more educated, have more sex than other mobile users.
NB: eBay == 25% of US mobile commerce
Ads are a good proxy for how companies think of themselves and their customers
Mobile touch is not only an interface for the hand, but of the hand.
44pts (~7mm) is a useful touch target (size of the fingertip)
44px is close enough on the web.
e.g. iPhone keyboard is 44pt high.
You can squeeze the other dimension as low as 29pt and it’ll still work, so 44×29 or 29×44 are ok.
iOS uses 44pt heights all over the place (key size, row height, top and bottom nav); it becomes a regular element of the graphic system, almost like a grid. Home icons are 88pts.
Android standardizes on 48dp (~10mm), because there’s enough variability in the hardware that they need a little margin for error.
The closer the buttons are together, the larger they have to be.
Hardware detection area accounts for occlusion (the target is a little below the visual), so smaller targets where you’re really concentrating and trying to be precise are actually harder to hit!
Touch targets tend to be larger than the visual area implies.
If you do have to take tap-risks and jam things together at the bottom of the screen, do the extra work and rebuild the bottom nav so the touch area isn’t that huge.
It’s ok / desirable to make your tap targets larger than they appear.
Don’t just make it easy to read; make it easy to touch. Clarity trumps density.
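The 44pt guideline translates to roughly 44px on the web, and padding can grow the touch area beyond the visible glyph, so targets look modest but stay easy to hit. A minimal sketch; the selector and exact values are illustrative, not from the talk:

```css
/* Touch targets at least 44px square, per the ~7mm fingertip rule.
   min-width/min-height guarantee the tappable area even when the
   label or glyph inside is small. */
.toolbar a {
  display: inline-block;
  min-width: 44px;
  min-height: 44px;
  line-height: 44px;  /* centers a one-line label vertically */
  text-align: center;
  padding: 0 8px;     /* extends the hit area past the visual edge */
}
```

This is the CSS counterpart of the iOS habit described above: make the touch area larger than it appears.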
Don’t just design in Photoshop; put it on the app and test.
Glance test: put it on the phone, hold at arms length, and see if it makes sense.
Bite-size content (e.g. a weather app) should eschew scrolling when possible. Single-screen makes the app feel more solid, almost like a physical object.
You’re designing a physical device.
Where do fingers and thumbs sit? (hot zones).
Varied rules for phones (iOS: controls at bottom. Android: controls at top).
Tablet: push to edges and corners.
Big fat touch targets: at least 44pt.
Progressive disclosure: clarity trumps density.
Design the main screen
Navigation to other views
Make use of the sensors
Create room for exploration
Capture lost time
Design w/ team!
Remember Choose Your Own Adventure books? Tangled navigation paths.
Paths should not cross.
Paths should be predictable and unique.
It may seem that multiple paths give more convenience, but they’re harder to model mentally.
Good for casual browsing, focused content, or discovery.
Good for variable screens, custom content.
Easy to swipe between pages.
Very little interface chrome; good for saving space, but not a lot of inference.
It’s flexible: typically let ppl change order, add cards.
Downside: can’t jump to a specific screen.
Limited to 20 (practically, ~10) cards; past that, there’s no more room for the navigation!
Works best for homogenous content; when the cards are different, it’s tough to remember which screen is which.
Works best for no-scroll screens. It feels like a card, like something physical. We’re not that good at scrolling in two directions.
Very familiar; this is where people start to design by default.
Easily allows for heterogeneous content.
Constant advertisement / reminder for what the app can do.
iPhone is limited to 5 tabs; add more and you’ll automatically get the “More” tab.
DO NOT use the ‘More’ as a junk drawer.
Have the discipline to leave things at 5 tabs on iOS; if you need more, go to Tree Structure.
If you’re on Android, drop to 4 tabs.
Android Ice Cream Sandwich introduces the Action Bar: show one tab, with 1 or more tools (depending on screen size and orientation).
Order: righties will have the leftmost tab in prime tab position, vice versa for lefties. New convention is to have the prime action being the center. How to draw attention? Make it a little bigger (a la old instagram or new foursquare).
Path: interesting vertical tab structure with invoked tabs, but a little weird and tough to discover.
Toolbar vs Tab bar
Toolbar: usually light background. Acts locally (on the content of the individual screen); it changes to show whatever tools you need to affect that content.
Tab bar: usually dark background. Thinks globally, switching among the app’s main sections.
Don’t show both at once.
Tab Bar Pros:
One tap access to all main features.
Clearly labeled menu advertises features.
It’s always there.
Tab Bar Cons:
Limited to four (five?) buttons.
Committing a lot of screen space.
Scales to a ton of content.
Familiar mental model.
Direct interaction w/ content: the word or picture you want to use; it removes abstractions.
Nested folders, a lot like column view in Finder.
Allows longer and more customizable menu options.
Often a list.
But sometimes you’ll see springboards (facebook) or galleries (photo albums).
Not much room for a way out or ‘back to home’. Leads to a lot of ‘back back back’ garbage taps. It’s not easy to switch branches.
Work-around: combine w/ tab bar.
It would be nice to have a gesture to go back. “Swipe across the top” is becoming the norm.
As designers, we need to protect users from actions that will do them harm. Gesture Jiujitsu can help, e.g. using swipes (easy to do, hard to do accidentally) for undo or Return To Home.
One of the great things about touch is that it removes mediation; you deal directly w/ what you want.
Main categories available only from top.
Inefficient to switch branches.
No standard for returning to top level.
iPad is a monster in the tablet space (although Kindle Fire is coming on strong in the US).
It’s too big a screen to be switching pages all the time. Often the phone’s “card-flipping” model doesn’t work as well.
Long-hold is the right-click of gestural interfaces.
Pop-overs are ok for quick interactions.
Use popovers to act on content.
Use popovers for quick peeks.
Avoid popovers for exploration or navigation.
None of the above: new interfaces
Design the personality of your app at the outset.
A personality will emerge no matter what; if you don’t design it, you’ll be at its mercy.
Simple things like backgrounds can change the personality greatly, even while sticking to the same design patterns.
Emotion is a design element.
Q: do you like skeuomorphism? At worst, it can be kitschy, but at best it can show you how to use the app. What does your metaphor propose? And what does it promise? If it looks like a book, I should be able to turn the page (I’m looking at you, iPad Contacts). Be true to your metaphor when you go skeuomorphic.
Skeuomorphism often takes a strong point of view, which can be good or bad.
The Dark Art of Aping Real Objects
Some clones are the same size: Guitar tuner, apple remote. Sounds and animation can enhance that illusion. You’re absorbing all the industrial design thought that went into the original, though you may lose some ergonomic affordances.
Miniatures (Chess, GarageBand’s piano) are a bit different. The interface loses its ergonomics.
Sometimes it’s just eye-candy. (E.g. voice capture app w/ its microphone).
Lots of shelves these days. It’s just a dressed-up tree structure, but we love collections. It’s fine for things that belong on a shelf (e.g. books), but you’ll lose emotional resonance when the objects can’t really be on shelves and the metaphors break.
When contemplating metaphors:
Is this a problem that can be solved with built-in, native tools?
Are you being too clever? Is the metaphor complicating the mental model?
Is your metaphor appropriate to the device? Phone OS’s are card-based; bringing windows in gets weird.
Do you have more interface than you need?
Don’t be different to be different, be different to be better. Different means I have to learn something new.
Creative interfaces can be joyful. See: BeBot (robot synth app).
Ask: am I going too far? Am I going far enough?
“The sin of pridefully obvious presentation”: Ed Tufte.
The NYT app quacks a lot like a newspaper. Why so boring?
Apple asked NYT to build a demo for the iPad, just before the iPad was announced. Three young designers / developers were brought in: two weeks to do it, no contact with home base, no cell phones. “You’ll be demo’ing to Steve, make us proud.” Eep!
After 2 days, they built a Deck-based version. Realized it was wrong; went back to what became “NYT Editors Choice”. “Strong if conservative first effort”.
Sometimes you can impress most by doing it quietly.
NYT App: “Yawn. It looks like the NYT”. Flipboard: “HOLY CRAP! It looks like the NYT!!!”
Old conventions aren’t necessarily old-fashioned.
Feature the content, not the interface.
Multitouch with Two Hands
Uzu app for iPad: multitouch manipulation of particle generators.
The utility of keyboard shortcuts comes from being able to do them w/o looking. Gestures provide a similar opportunity.
Gestures that begin at the edge should be OS-level gestures: android, WebOS, Meego all did this. Apple used to, but recently has been breaking edge gestures by taking them over.
Buttons require cognitive and physical effort.
Gestures ~= keyboard shortcuts.
Use the entire screen as a control.
Standards are emerging: tap, swipe, pinch/spread, long tap.
Model content as physical objects.
Explore multitouch gestures.
Follow the toddlers; they’re better at this than we are. They haven’t been spoiled by 20 years of desktop interactions.
Make a 3 yr old your beta tester. She won’t understand the content of the app, but she can use the interface and navigate.
“NUI”: Natural User Interface
Shake is a powerful gesture, but can be gimmicky. It takes the focus off the app, and onto the device.
Younger people are more inclined to rotate the phone; older folks stay with portrait.
Landscape tends to be more engaging: both hands are occupied, and the aspect ratio is closer to our biological field of view.
The more backward-compatible (accessible) your app is, the more forward-compatible (future-proof) it’ll be.
Mobile’s a great wedge, because it’s got everyone in a panic right now. We’ve been building websites the wrong way for 15 years.
We’ve got the most exciting job in the world!
When designing for mobile, don’t confuse context with intent. – Josh Clark
I’m very excited to head to London next week to speak on Code Literacy for Designers at the always-excellent Future of Web Design Conference. Working at Pivotal Labs, I’ve learned a lot over the last few years about how Agile software development and design interact, and I’m really looking forward to engaging in the conversation in one of my favorite cities. If you’ll be at FOWD, please come check it out at 11:15 on Wed! And if you’re in London and want to talk UX, drop me a line!
From the talk description:
Kris Hicks reassures us that the vim path through git interactive rebases need not lead to madness. If you’d like to do an interactive rebase in your editor of choice (rather than TextMate, the Pivotal default), you can set the GIT_EDITOR environment variable. So go ahead to the terminal and
export GIT_EDITOR=vim && git rebase -i origin/master
Vim will launch, changes will be made, commits will be squashed, and all will be right with the world. Until you try to save; after making your changes and :wq-ing, the terminal will admonish you: “Could not execute editor”.
The problem is a vi + Mac OS X + git incompatibility with pathogen (the vim package manager). To fix it, add the following lines to your .vimrc file:
Schubert warns us that Rails extensions that add a to_time method may cast types in unexpected ways:
Date#to_time => Time
Time#to_time => Time
but DateTime#to_time => DateTime
Use === when checking equality with DateTime when you don’t care about precision (this does not work with Time, however).
Your humble author cautions that the new Laullon GitX is not ready for prime time. When adding multiple files with a single click, a garbage commit with a long funny name is created without adding the files.
Instead, consider Brother Bard’s excellent fork of GitX
Ian “Waffles” Zabel mentioned that jQuery 1.6 has been released. Notable changes include case-mapping of HTML5 data- attributes, performance improvements, and more.
Lee Edwards reminds us “It’s Star Wars Day. May the 4th be with you.” </rimshot>
The evening started off with Ben Woosley and myself giving a brief tour of the Pivotal Labs Agile practice, culminating with a discussion of how we integrate UX. Ben and I sketched out a quick talk earlier in the day and then converted it to html slides using the excellent Slidedown. When an early show-of-hands from the audience indicated that most of them weren’t familiar with Pivotal Tracker, we stopped our slides and dove right into Tracker, demonstrating the basics to a sea of mostly nodding heads. A conversation followed with a number of great questions, including discussions of automated design testing, integrating design velocity and development velocity, and the finer points of pair-programming. I’m looking forward to continuing this conversation as the Agile Experience Design meetup continues.
After the Pivotal presentation, a few group members came up and gave short talks. Lane Halley gave a brief overview of the upcoming Agile Alliance convention in Chicago. Lar Van Der Jagt gave a great demo explaining Test-Driven Development using Cucumber, a testing framework that lets users write in pretty-close-to-plain-English, which looks something like this:
and Pickler, which synchronizes user stories between Pivotal Tracker and Cucumber via the command line.
Finally, Jeff Gothelf talked about the journey from waterfall to Agile and gave a great illustration of the use of wikified style guides to aid in the transition and streamline communication between developers and designers.
Thanks to the speakers, the attendees, the organizers, and especially the UX Workshop for livestreaming the whole event!
Last night Pivotal participated in the first ever New Tech Meetup Showcase. The Showcase offered 60 NYC technology companies a chance to show off their wares to a large and enthusiastic crowd, and Pivots Mark Michael, Dan Podsedly, and Ian McFarland held down the Pivotal table, demoing Tracker and seeing what other companies had to offer. The New York New Tech Meetup is the biggest meetup in the world with over 10,000 members, and—this being Internet Week in NYC—many of them were out in force. After the Showcase the action moved to a 700-person auditorium where 7 companies gave 5-minute live demos to a rapt house.
The meetup presenters were varied and impressive, running the gamut from human-powered search to some cool geoloco apps (one for social networking and another for 3D mobile iPhone wayfinding) to it-just-works in-browser live video-streaming and production apps, to the NY State Senate’s cutting edge use of social technology to make government more responsive and accountable. The Pivots-in-attendance were especially blown away by two in particular. Aviary is a suite of fully-powered in-browser content creation tools which does for Photoshop and Illustrator what Google Docs did for Microsoft Office: it makes them cheap, available to any computer with a net connection, and facilitates collaboration and sharing. The fact that these apps are fast enough and robust enough to compete with desktop software is pretty inspiring. The second super-uber-cool demo we saw is called MakerBot, a company that’s building and marketing and community-organizing a $750 open-source desktop 3D printer. The kit is open-source, so you don’t actually need to pay MakerBot to get all the parts, but sourcing them yourself is kind of a pain. MakerBot is making it easier for everyone to have and use a robot on your desk that can build anything you can imagine. Very inspiring stuff, and proof that there’s awe-inspiring cutting-edge tech on both coasts.