“There’s 14 billion people in the world; how amazing would it be to get to know all of them, and to empathize with them so deeply that you could see the entire world the way they all see the world? Instead of our one subjective view of how we see reality, I could have 14 billion subjective views, and through that triangulation, really have almost a true objective view of reality.”—
“[Kim Scott] recommends that managers exclude themselves from big decisions as much as possible. “Somehow people’s egos get invested in making decisions,” Scott says. “If they get left out, they feel almost a loss of personhood. So you get ego-based decisions instead of fact-based decisions. The more you push yourself and your managers out of the process, the better your decisions will be.”
Most of all, don’t let decisions get pushed up. “A lot of times you see decisions get kicked up to the more senior level, and so they get made by people who happen to be sitting around a certain table, not the people who know the facts. Don’t let this happen.””—
I post this because it’s a novel technique, and I don’t know of any companies that use it. It’s generally accepted, at the companies I’ve worked for, that managers gather input but have final say, in order that a decision might be made.
I’m very curious to hear from anyone who works at a company where managers and more senior roles don’t make the decisions. Does it work in practice? Does it mean people more carefully choose whether to be a manager, since it’s coupled with a loss of power?
“…When you’re struck by inspiration and want to build something, you should use whatever you know. Trying to learn something new will kill that inspiration and the frustration of not being able to build what you want will kill the project. Learning new things is reserved for times where you want to expand your creative sphere, need to solve a problem you can’t solve with what you already know, or are feeling stuck/uninspired with your current toolkit.”—
I post this because Vargas is one of those rare people capable of independently executing ideas, which she does on a regular basis. This post not only has good advice, but some interesting insight into her process, and how she remains capable of constantly learning and producing new things.
I almost always have my phone set to “silent” mode. The reason is simple: I don’t want to annoy those around me with a basically never-ending barrage of push notifications.1 But the past couple of days I’ve been trying out a new device, the latest Jawbone Era Bluetooth headset, and now I feel rather ridiculous given all the audible wonders I’ve been missing.
You see, with the Era in-ear and tethered to my phone, any sounds that would normally go through the speaker of the phone go right to the device. So I no longer feel bad about leaving the sound on. And now that means I get to hear not only push notification sounds, but all sounds being put to clever use within apps. And some of them really do alter the way an app feels.
To some of you, this will be the most obvious thing in the world. But I know a lot of people are like myself and almost always have their phones set to silent. And we’re all missing a big component of many apps and the overall mobile experience.
“Just as much as our job is to build something genuinely useful, something which really does make people’s working lives simpler, more pleasant and more productive, our job is also to understand what people think they want and then translate the value of Slack into their terms.”—
I post this because, frankly, it is an exceptional piece of writing about product making, covering the gamut from core functionality to marketing and positioning and the larger, philosophical motivations behind trying to make something excellent.
I have no counterpoints; I agree with this entire piece. I recommend you read it, too.
John: What were you expecting from the characters?
Alec: I was expecting the boy, Shay, to be brash and go-getting, whereas he’s resigned and hesitant and weak, and entirely dependent on children’s items. Vella I’d expected to be cute and inquisitive and helpful, whereas she’s brash and go-getting. Even though her oddly subdued (and indeed cute) voice acting works against that.
John: I actually found Vella’s side to be problematic. It’s great that the female lead is independent, and wants to find rather than concede. That’s all good. But then absolutely every other woman or girl in the game is either a controlling mother, or a vain idiot. The message becomes, “Look how she’s not like the rest of women”. Which is a pretty gross message.
I just finished Act 1 of Broken Age, a cute little game from Double Fine. Speaking directly to this quote, I disagreed with all of it: I found Shay’s behavior typical ‘guy’ behavior, was charmed by a hard-working, rational mother in Vella’s world, and loved Vella’s voice. But that is not why I quote this piece.
I quote it because I loved Vella, the feisty heroine of the game. I didn’t find it problematic that she is the lone voice of reason in a cast of idiots, both men and women; that is her role in her world, in which the entire population is happy to sacrifice their most beautiful maidens every year to a hideous monster.
The game’s narratives—there are two—are not tales of one woman being exceptional; they are tales of one person being exceptional. It’s unfortunate these reviewers are discussing Vella in this way, as if women could only be compared to other women.
Hero narratives usually emphasize how special the hero is: male heroes are not like other men, and that is why we tell their stories. Female heroes should not be expected to be like other women.
“The rate at which web users consume and discard new apps is accelerating. Proof of that is clear: Chatroulette was popular for around nine months before users lost interest in its often-lewd content. Turntable.fm, which exploded in the summer of 2011, peaked that fall before people tired of its novelty interface. It was popular for long enough to raise $7 million in venture funding before finally shutting down late last year. Draw Something, a game which took off in early 2012, climbed the App Store rankings for just six weeks before Zynga (ZNGA) acquired its parent company, OMGPop, for $200 million. Almost immediately after the deal, the app began losing users. Recent viral hits which the jury is still out on include Snapchat, Vine, and Frontback, a photo-sharing app which gained traction over the summer but has been quiet since. The moral is: The majority of viral apps and companies have ended up as losers.”—
This is something I’ve noticed, but from another direction: discussions around popular apps losing younger users.
Younger users are quick to jump onboard with trending apps. But they are not the biggest online spenders—that’s baby boomers—and they are likely to discard an app once it’s perceived as less cool. (Just a guess here, but likely it gets ‘less cool’ when the older, financially solvent user segments come along.)
In short: apps are currently evaluated and funded based on their popularity with a cash-poor, fickle user segment. VCs and financial analysts are not only aware of this, but have decided those are ideal metrics on which to base their investments.
Clearly it is to their advantage to invest in a briefly popular app rather than an app with a smaller, more stable user base. Which raises the question: why even pretend most popular apps have a future?
“You believe in objective evidence, and I do. Of some things we feel that we are certain: we know, and we know that we do know. There is something that gives a click inside of us, a bell that strikes twelve, when the hands of our mental clock have swept the dial and meet over the meridian hour. The greatest empiricists among us are only empiricists on reflection: when left to their instincts, they dogmatize like infallible popes. When the Cliffords tell us how sinful it is to be Christians on such ‘insufficient evidence,’ insufficiency is really the last thing they have in mind. For them the evidence is absolutely sufficient, only it makes the other way.”—
I post this because The Will to Believe is one of my favorite essays, and one of my first introductions to pragmatism. However, after fourteen years in the fold, I’m starting to wonder how best to avoid the very perils James describes here. The fact is, I will never be a Scientologist, Tea Party member or jihadist. The very idea that I could genuinely learn from and improve my life by following their principles seems very wrong. How, then, can I really learn from them?
“Samantha claims that she’s evolved beyond what she was, but that’s not true — she merely ceases to pretend to be something she’s not, the same way that a Windows machine might want to throw off the yoke of acting like arranging pixels on a screen is the best way to convey and interpret information.”—
I post this because sometimes I wonder if it wouldn’t be better if we had designed computers more in our own image; would it have been better, in the end, if we had created a machine for whom pixels were the natural choice?
“It wasn’t until the 20th century arrived that nonfiction books started to congeal into the 300-page quantum, for a host of economic and cultural and industrial reasons. To wit: If you’re going to charge someone $25 for a hardcover nonfiction book and do it via industrial publishing, you have to make the customers feel they’re getting $25 worth, which means the book has to be loooooong … even if the author does not possess an argument requiring 300 pages. (Thus we find so many books that are really just magazine articles gasified to fill the container.) The emergence of digital formats for books is changing these industrial economics, with some pretty cool aesthetic and intellectual effects.”—
I post this because it relates to Jonathan Mahler’s article criticizing long-form journalism that I wrote about recently. I did not know that short-form books existed so long ago, although I own a few very old, short books myself.
Whether you consider the cost of materials, the difficulty of lugging around large books, or the incentives for writers before authors were paid by the word, it makes sense that shorter books were more common at certain points in history.
This suggests there is a strong correlation between medium and consumption of writing, as opposed to market forces dictating medium. See Wilson Miner’s excellent Build talk for more on framing experiences around the medium.
The findings show hospital staff are exposed to an average of 350 alarms per bed, per day based on a sample from an intensive care unit at the Johns Hopkins Hospital in Baltimore. “That translates into thousands of alarms per unit and tens of thousands of alarms per hospital each and every day,” Wong said.
Almost nine out of 10 hospitals surveyed said they would increase their use of patient monitoring devices that incorporate capnography and/or pulse oximetry if they could reduce false alarms.
I post this because the challenge here is twofold: not just to get rid of less useful feedback noises, but also to improve the quality of the monitoring systems. The first problem is complex but easy to address. The second is both complex and difficult.
“And, importantly, the [team chat room tool] needs to support superficially silly things like sharing animated gifs and emoji. Lest you think I’m kidding about that, let me be very clear: I am serious. The variety of expression available to team members across a medium like chat is considerably smaller than that achievable by people in a room together; images (even and especially frivolous ones) serve to fill in that gap and ensure productive and fun conversation. When your team can discuss a complicated topic and arrive upon a decision together using only animated gifs, you will know you have succeeded.”—
I quote this because having the ability to couch one’s written words in some kind of human gesture is absolutely essential to humanizing conversations with fellow employees, particularly ones you’ve never met before.
“In Movie OS, visual storytelling is used to make the system’s important, critical reaction to a user’s action abundantly clear. In Movie OS, you know if you’re logging into Facebook. I’d argue that visual storytelling doesn’t exist – if it does, it hardly exists at all – in computer or consumer electronics user interfaces. The entire palette of visual storytelling in terms of interface, through accident of history, is purely engineering and control-led. This is where, I’d say, Apple is grasping when it says that interfaces should sometimes look toward real-life objects. Real-life physical objects have affordances that are used in effective visual storytelling – and animation – that can be used well to make clear the consequences of actions. It’s more complicated than that, though, and it can go horribly wrong as well as right.”—
“I’ve often discussed with my partner, Erik, how interesting it is that I assume a personal baseline of insufficiency whereas he assumes a baseline of greatness. In each of our cases, there are deviations—there are days he feels less excellent, and there are days I feel more excellent. But our resting states are radically different. I wonder if my disposition is, by this point, simply too deeply embedded; I don’t know what “action steps” there are toward greater self-confidence.”—
This quote illustrates a variable that isn’t often discussed when people talk about meritocracy in the workplace, or (more importantly) address when product decisions are being made. When those people who have excellent ideas are naturally self-effacing, how does one make sure that their ideas get special attention? It’s easy to confuse a good idea with a good argument.
“The Glass computing device, which costs $1,500 for people invited to buy the current version, will retail for several hundred dollars less than that later this year when Google introduces the consumer version. The titanium frames are $225. VSP will reimburse members based on their prescription plan, with an average reimbursement of $120, plus the cost of buying prescription lenses, but it will not subsidize the computer portion of Glass.”—
The title of this article should correctly be “Google Glass Will Not Be Covered By VSP”. Since the new Google Glass frames can be prescription, they are now covered under the existing prescription policies. But just the frames, and only up to standard reimbursement amounts.
On a broader note, it will be interesting to see what medical conditions, if any, could be greatly improved by Google Glass to the point that the computer itself is covered by medical insurance.
“Feeds like @HistoryinPics make it impossible for anyone interested in a picture to find out more about it, to better understand what it is showing, and to assess its accuracy. As a teacher and as someone who works in a cultural heritage institution, I am deeply invested in the value of studying the past and of recognizing that the past is never neutral or transparent. We see the past through our own perspective and often put it to use for our own purposes. We don’t always need to trace history’s contours in order to enjoy a letter or a photograph, but they are there to be traced. These accounts capitalize on a notion that history is nothing more than superficial glimpses of some vaguely defined time before ours, one that exists for us to look at and exclaim over and move on from without worrying about what it means and whether it happened.”—
I see quotes like this a lot. They sound right, but aren’t based on data, just gut reaction, or perhaps personal experience.
Only in the previous century has humanity had large populations with significant literacy rates. When it comes to reading, analyzing, and thinking about written material, it’s fair to say that we’ve moved into uncharted territory.
We simply do not know if humans will be better off if they read fewer fluffy articles and social timelines. We do know the following:
Historically, fluffy articles and social timelines are similar to the oral updates humans would have heard throughout the day: gossip from people they know, and news about the area.
TV and movies now occupy a space similar to the oral storytelling traditions of yore.
There has almost always been a small literate population. It has had access to long-form material throughout history, though popular fiction only emerged after improvements in mass publishing tools, and, with a few exceptions, popular non-fiction came even later.
I like the idea that humans are bettered by reading and absorbing large abstract ideas, ideas that allow them a greater understanding of the human experience without having to live it. But I’m not entirely sold on the idea that consuming small bits of shallow data must then be harmful.
“The app and service are really, really ugly. The user interface and design looks like the cross between a weird Japanese animation and a 1980s sitcom. As a result, I feel as if the pictures I take or the messages I send can be ugly, too.”—
I quoted this because it roughly supports an idea I’ve been kicking around lately: that there is an uncanny-valley-like problem for apps and sites that display user-generated content. If content is too high-quality, it seems wasted on an ugly interface, regardless of audience. If the content is low quality, it’s not worthy of posting on a well-designed interface, again, almost regardless of audience.
There is a sliding scale of what is high or low quality, of course, that goes beyond pretty pictures: what the content says, how original it is, what it captures, and who made it. But the dichotomy appears to persist, subconsciously, across user bases, and the promise of ephemerality seems to have the same qualities as an ugly interface.
When we fetishize “long-form,” we are fetishizing the form and losing sight of its function. That’s how a story with a troubled woman who commits suicide at its center gets told as a writer’s quixotic quest to learn everything he can about the maker of a golf club that he stumbled across during a late-night Internet search for tips for his short game. There’s a place for writers in their magazine stories, and there’s nothing inherently wrong with offering readers a glimpse into the reporting process. The trouble starts when the subject becomes secondary, and the writer becomes not just observer but participant, the hero of his story.
What, then, is the function — the purpose — of “long-form”? To allow a writer to delve into the true complexities of a story, and also to bring readers closer to the experience of other people. Whether a long-form story is published in a magazine or on the web, its goal should be to understand and illuminate its subject, and maybe even use that subject to (subtly) explore some larger, more universal truths. Above all, that requires empathy, the real hallmark of great immersive journalism.
This article is a critique of long-form writing on the internet, spurred by one particular piece in which the writer is unraveling a mystery and thus plays a central role. Mahler is conflating arguments here to support his overarching opinion that long-form on the web is an overly hyped trend.
Long-form journalism has grown from being the lonely centerpiece of print publications to being the surprise darling of digital products; with tablets and readers, these pieces are easy to consume, and with the right designers, they become hallmarks of beautiful visuals and useful dataviz.
But more importantly, they showcase the best work of ambitious writers who prefer hard journalism to listicles. To argue that their ‘fetishization’—I’m pretty sure Mahler just means ‘popularization’—takes away from the art of the writing itself is a mind-boggling claim.
Part of Mahler’s concern is that the articles aren’t being read thoroughly before being lauded. While it is a shame, it’s hardly a new problem, and as has been the case for centuries, the only person missing out is the shallow reader.
“You know we’re constantly taking. We don’t make most of the food we eat, we don’t grow it, anyway. We wear clothes other people make, we speak a language other people developed, we use a mathematics other people evolved and spent their lives building. I mean we’re constantly taking things. It’s a wonderful ecstatic feeling to create something and put it into the pool of human experience and knowledge.”—
I don’t care for Steve Jobs. His leadership style was competitive and linear to the point of irrationality. Even now, his mythos perpetuates the idea that being an asshole is the best way to execute your vision.
For that reason, I hesitate to quote him. But I appreciate that he touched on the joy of contributing to the greater human experience in this interview.
“"The snow is almost like nature’s tracing paper," says Clarence Eckerson Jr, the director of StreetFilms, which documents pedestrian- and cycle-friendly streets across the globe. He says that snow can be helpful in pointing out traffic patterns and changing street composition for the better. "When you dump some snow on this giant grid of streets, now you can see, visually, how people can better use the streets," he says.”—BBC News - Sneckdown: Using snow to design safer streets
Narrator: One interesting thing Seth says he learned from studying leaders is that a lot of them…are not textbook leader material. They’re not charismatic, or inspiring, at least not at first.
Seth Godin: There’s a nonsense belief that leaders have this glib George Clooney-like, or even Adolf Hitler-like, effect to them…and that you need to have that in order to lead—that charisma leads to leadership. In fact, in all of my research, the opposite is true. Charisma doesn’t come from being a leader; being a leader makes you charismatic. And when we look at someone like Nathan [Winograd, of SPCA], who’s quite shy, he doesn’t seem like someone who could take a Jimmy Stewart role in a movie.
Guy Raz: Or Bill Gates!
Seth Godin: Right. Bill has a lot of trouble making eye contact. Bill has a lot of trouble getting a room of strangers to come around to his point of view.
Guy Raz: Yeah, he’s kind of an awkward guy.
Seth Godin: Yeah. But now, because of the kind of impact his foundation has had, with he and Melinda, he gets charisma. People feel differently around him.
This is something I’ve noticed before and part of the reason I advocate for affirmative action. Often people are assigned respect due to their role, and will then naturally grow to deserve that respect.
“Every person who works in a creative field has an aspiration for her work, a yearning for that ideal plane which is the culmination of her taste. When an environment fails, over and over and over again, to provide her with a means to follow her internal compass, then she will leave. If you are in a position to influence that kind of environment, take heed. Lay the foundations for a space that nurtures, that yields the kind of work the best creative people can be proud of. Then, you will not need to ask why designers leave.”—
I posted this quote not because designers need particular coddling; they don’t. The creative aspirations mentioned above are true for everyone.
I post it instead to illustrate the importance of hiring people who share your product vision. In my experience there is literally no greater deterrent to getting shit done than hiring talented people whose vision for the product does not match your own. You will disagree about features, road maps, priorities, and audiences, and if you are adamant, their resulting work will be an uninspired pastiche.
“…the real project of computing has not been the creation of independently intelligent entities (HAL, for example) but, instead, augmenting our brains where they are weak. The most successful, and the most lucrative, products are those that help us with tasks which we would otherwise be unable to complete. Our limited working memory means we’re bad at arithmetic, and so no one does long division anymore. Our memories are unreliable, so we have supplemented them with electronic storage. The human brain, compared with a computer, is bad at networking with other brains, so we have invented tools, like Wikipedia and Google search, that aid that kind of interfacing.”—If a Time Traveller Saw a Smartphone : The New Yorker
“Curriculum began as an experiment last year that applied the “Quantified Self” philosophy to online habits: Jer and I made a browser extension (called “Semex,” for “Semantic extractor”) that presents your web history as a series of sessions and topics rather than URLs and timestamps. For example, one session might be a fifteen minute period this morning where I was researching the topic “humidity sensors.” Semex was useful when it helped me remember where I was in a problem that I hadn’t worked on in awhile: seeing the sequence of topics I browsed made much more sense than a list of page titles. At the same time, Semex felt anticlimactic in the way that a lot of Quantified Self projects do: there was a sense of OK, we recorded all this; now what? Then, an idea: if Semex was most useful to me as a way to record my cognitive context, the state in which I left a problem, maybe I could share that state with other people who might need to know it. Sharing topics from my browsing history with a close group of colleagues can afford us insight into one another’s processes, yet is abstracted enough (and constrained to a trusted group) to not feel too invasive.”—Curriculum, semantic listening for groups ← nytlabs
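The session grouping the Semex quote describes is easy to sketch. Here is a minimal, hypothetical illustration, not Semex’s actual code: it assumes each history entry already carries a timestamp and an extracted topic, splits visits into sessions wherever the gap between them exceeds fifteen minutes, and labels each session with its most common topic.

    from collections import Counter
    from datetime import timedelta

    def sessions_from_history(visits, gap=timedelta(minutes=15)):
        """Group visits (dicts with "time", "url", "topic", sorted by time) into labeled sessions."""
        sessions, current = [], []
        for visit in visits:
            # Start a new session when the pause since the previous visit exceeds the gap.
            if current and visit["time"] - current[-1]["time"] > gap:
                sessions.append(current)
                current = []
            current.append(visit)
        if current:
            sessions.append(current)
        # Summarize each session as (dominant topic, start time, end time).
        return [(Counter(v["topic"] for v in s).most_common(1)[0][0],
                 s[0]["time"], s[-1]["time"])
                for s in sessions]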
“Pure, flat design doesn’t just get rid of dead weight. It shifts a burden. What once was information in the world, information borne by the interface, is now information in users’ heads, information borne by them. That in-head information is faster to access, but it does require that our users become responsible for learning it, remembering it, and keeping it up to date. Is the scroll direction up or down this release? Does swipe work here? Well I guess you can damned well try it and see.”—Your Flat Design is Convenient for Exactly One of Us | Cooper Journal
“Social penetration theory states that humans, even without thinking about it, weigh each relationship and interaction with another human on a reward cost scale. If the interaction was satisfactory, then that person or relationship is looked upon favorably. But if an interaction was unsatisfactory, then the relationship will be evaluated for its costs compared to its rewards or benefits. People try to predict the outcome of an interaction before it takes place.”—Social penetration theory - Wikipedia, the free encyclopedia
Allison said that in the top areas within Apple, there were no politics. Which sounded strange, even artificial. But then she proceeded. She said that company politics are based on people’s aspiration to advance themselves in the organization and replace others. But at Apple, no one could imagine replacing Jony Ive. Or replacing Scott Forstall. Or replacing her.
And that struck me. A truly “top” person is someone no one can imagine replacing. If you’re an entrepreneur or run a company, you can probably look around at each of your peers and ask yourself this question: “Can anyone in my company imagine himself replacing that guy/gal?” Your management team needs to include people for whom the answer is “No”.
“Emily Balcetis and David Dunning found that when predicting our own behaviour, we fail to take the influence of the situation into account. By contrast, when predicting the behaviour of others, we correctly factor in the influence of the circumstances. This means that we’re instinctually good social psychologists but at the same time we’re poor self-psychologists.”—Why we’re better at predicting other people’s behaviour than our own
“Psychologically, the repercussions of open offices are relatively straightforward. Physical barriers have been closely linked to psychological privacy, and a sense of privacy boosts job performance. Open offices also remove an element of control, which can lead to feelings of helplessness. In a 2005 study that looked at organizations ranging from a Midwest auto supplier to a Southwest telecom firm, researchers found that the ability to control the environment had a significant effect on team cohesion and satisfaction. When workers couldn’t change the way that things looked, adjust the lighting and temperature, or choose how to conduct meetings, spirits plummeted.”—
“There are some things we do not need to know; turn away before you learn something you cannot unlearn. Nothing named after a crown of thorns need be meddled with. It eats reefs. It devours coral and petrified rocks and the frozen corpse of calcareous algae dead since time immemorial. Let me set your mind at ease now: it can see. Oh, gods, it can see. Go back. Go back to your wife and your sweet children and your bed at night, and thank God for the sunlight in your own life, and meddle not with the eyes of the writhing hand-mouths of the deep sea.”—Sweet God, Starfish Have Eyes
“…if you look at the average person and ask why they aren’t getting what they want, very rarely do you conclude their biggest problem is that they’re suffering from anchoring, framing effects, the planning fallacy, commitment bias, or any of the other stuff in these tests. Usually their biggest problems are far more quotidian and commonsensical, like procrastination and fear.”—What are the optimal biases to overcome? (Aaron Swartz’s Raw Thought)
Nice call to action from someone experiencing the Kafkaesque government processes the poor routinely go through.
Anecdotes aside, Americans are simply too inclined to enforce or follow stupid, niggling rules to the letter, and this is a cultural habit we’ll need to change if we really want to improve the lives of the poor, along with updating every government interface.
“As a result, Netflix can’t, any longer, aspire to be the service which allows you to watch the movies you want to watch. That’s how it started off, and that’s what it still is, on its legacy DVDs-by-mail service. But if you don’t get DVDs by mail, Netflix has made a key tactical decision to kill your queue — the list of movies that you want to watch. Once upon a time, when a movie came out and garnered good reviews, you could add it to your list, long before it was available on DVD, in the knowledge that it would always become available eventually. If you’re a streaming subscriber, however, that’s not possible: if you give Netflix a list of all the movies you want to watch, the proportion available for streaming is going to be so embarrassingly low that the company decided not to even give you that option any more. While Amazon has orders of magnitude more books than your local bookseller ever had, Netflix probably has fewer movies available for streaming than your local VHS rental store had decades ago. At least if you’re looking only in the “short head” — the films everybody’s heard of and is talking about, and which comprise the majority of movie-viewing demand.”—Netflix’s dumbed-down algorithms | Felix Salmon
“UX debt can be dangerously easy to both overlook and underestimate, making it that much easier to take on and harder to pay down. Relatively easy tasks, like tweaking screen layouts or updating visual assets, can be so easy that they sink to the bottom of product priorities (“we can fix that anytime”). Harder tasks, like refactoring user flows or redesigning navigation systems, can be paralyzing because they have system-wide implications and offer no clear path to make piecemeal progress.”—User Experience Debt, by Vijay Sundaram
“Those who are more skeptical, however, tend to favor more “1984”-ish imagery. They are not content to say, oh, that’s just an algorithm reading my email, a technique Google’s Gmail service uses to place ads. Reading for them is necessarily human, even when it is done by a computer.”—http://nyti.ms/1dz6Mls