To anyone reading this, be aware of how grabbing your phone in the morning to read headlines or Medium digest can turn into 30 minutes or even an hour. Think about how checking e-mails can take you completely out of context and badly dilute the quality of your work. Consider how the amazing small interactions with other people that make life beautiful can be destroyed by even a glance at the phone.
I really feel that monitoring my phone usage and actively cutting it down has improved my quality of life. Now that Apple has built in tools to do this, I hope more people will treat it as seriously as they treat exercise, nutrition, etc.
Is it the same for news reading, or do you just read less news these days?
Even if reducing mobile screen time leads to some increase in desktop screen time, it could still be a good thing for social conventions and quality face to face interactions. I'm just curious about whether we're reducing screen time overall, or just shifting it from mobile/interrupted to desktop/purposeful.
I find that if I end up just playing an online desktop game or putting on the Oculus, the time I spend away from my phone is higher quality and better for my well being overall, even if it replaces something "productive" like answering work emails on my giant (and yet oh so tiny) Note8.
I agree with this completely, but the converse is also true. Sometimes you need to switch context during the day, and scanning headlines and reading emails helps me do that (I'm doing it right now!).
The best context switching strategies I've found are eating and meditation, but those aren't always feasible or socially acceptable, so browsing headlines is a good stand-in.
https://www.theverge.com/2018/6/5/17426922/apple-digital-hea...
This issue of "falling down the LCD well" has been at the forefront of electronic music for a while. Synthesizers were cool in the 70's because they had knobs, and any abstract logic involved was done by you, so you had to be into it.
Then the 80's and 90's came along and we got stuff like the DX7 and innumerable "workstation keyboards" that were little more than a tiny display and two or three buttons. Maybe a jog wheel if you were lucky.
These were unambiguously cleaner from a design perspective, but what people began to realize is that the screen was too much of an abstraction. Then of course DAWs came along and even the synthesizers themselves moved into the computer physically.
Music, like life in general, is visceral, and the screen is not. Musicians became frustrated with the lack of physicality.
The modular synthesizer approach of the 70's has made a HUGE resurgence with the eurorack standard.
Not everyone, but LOTS of people actually PREFER a gigantic mess of tangled wires with physical plugs and knobs to a sterile pure-logic implementation on the computer that can do all the same stuff cheaper and in a more reproducible way.
I suspect we'll see a similar sort of resurgence of physicality across every product that has been absorbed into the computer screen.
- was taking up half my time.
And that was after the learning curve of trying to adapt to the melange of hardware, interface, and software options.
That time and energy were taken away from creativity and from experimenting with music to improve its richness, interest, and originality.
Modulars can be more expressive in expert hands (seldom the case). But making good music requires a different kind of expertise. And I hear that difference - and the cost of all the lost creative energy - on the radio every day.
First off, with speech everyone can tell what you are doing, and it is obtrusive - look at the old DMV signs against cellphones, from back when they could literally just make calls.
Second, it is just plain worse as an interface - just try to use a phone tree. People have dutifully ignored the existence of phone-tree answering systems, except for the visually impaired, who frankly lack options and must use what everyone else would consider useless.
Third, there is less you can do with it and thus less reason to get involved with the frontier. It's the same trap as the smartwatch: people asked what you could actually do with it, and the Apple Watch flopped despite Apple's trendiness.
Direct thought reading might work better, but that is in the easier-said-than-done category. They can't even make an acceptably accurate non-invasive glucose meter, so it is very unlikely to show up in consumer goods.
Google Glass, interestingly, also largely flopped for several reasons despite heading in the opposite direction, and it provoked sheer irrational hatred above and beyond all other carryable or constantly recording cameras. AR sounds nice, but it also needs to account for the significant glasses-wearing population. Glass also apparently had battery life too short for something meant to be worn as a HUD.
Noting where things can go wrong is easy compared to figuring out where to go in the future, even with caveats like "10 years of directed research lead time". I think the market for personal electronics has matured for now, until they can offer "magic" again. VR is neat but niche, stuck in a chicken-and-egg situation.
Now more and more people have a voice-activated assistant not only in certain rooms of their home or in their pocket, but also on their wrist. It might seem to be a small thing not to have to reach into your pocket to invoke a voice assistant, but that small amount of time saved really adds up, especially if you're doing something with your hands like cooking or holding a baby, which some people still do!
There’s a massive wave of geriatrics coming, and various dysarthrias (be it as significant as a stroke or as minor as edentulism or poorly fitting dentures) are extraordinarily common.
Any voice tech that can’t handle speech impediments isn’t ready for the market - it’s just too big a demographic chunk.
Information and the psychological dependency it creates, whether a message from someone we are attracted to or the stimulating audiovisual response of a slot machine, used to be tied to slow physical mediums like snail mail and restricted by location. In that context, the smartphone has created a whole new era of crazy that we are completely unprepared to deal with. Coupled with a pace of development so rapid that no one had time to adapt to the internet before it became ubiquitous, smartphones have provided a perfect delivery mechanism for psychological dependency, optimized on a massive scale, long before we've had time to adapt.
With AR and VR, my money is on more screen time in the future not less.
Tomorrow, when new interaction models exist, we will have ads there too: on the watch, during Siri conversations, ...
Same as Facebook in its early days, they will wait until we are used to it before they start. I'm looking forward to the "vocal assistant" version of uBlock Origin ;)
Apple's essential point of differentiation from Android is the fact that they make money on hardware. Google give away their mobile OS to funnel more attention into their attention monetisation machine. Every step that Apple takes to protect user privacy deepens their moat, because Google only make money by harvesting data and monetising attention.
Google can't compete on privacy, so it's very much in Apple's interests to push that issue as hard as possible. Their decision to block ad trackers by default in Safari was very smart; deciding to insert ads into Siri or WatchOS would be indescribably stupid.
Does anyone else think this kind of writing is unnecessarily hyperbolic? I'm so tired of reading articles that resemble the next Michael Bay script. The core of this article may have value, but I can't even get to it since it's drenched in distracting click-bait sauce.
This isn't story time, New York Times. Treat me like an adult.
That is so depressing, but thinking about it myself, 11 is probably a minimum. I'd have to change careers to selling ice cream at the beach or something.
You will learn to loathe ice cream, the beach, and even the nicest people you could see there. (Quoth Kierkegaard: https://www.goodreads.com/quotes/7141047-marry-and-you-will-...)
I miss working at the park. And I'm pretty sure it's not just nostalgia. I liked being outside. I liked working with the people there.
I've been doing pretty well at work lately, but I don't seem to get any happier with any type of promotion I receive. It's just more of a challenge, which I like, but it doesn't make me any happier.
I make more money now, and I just keep working hoping that I'll get to a point where I'm happy with my job, where I find something I like, but it doesn't really seem to be happening.
---
I made a throwaway for this because after I typed it up, I realized it's just me lamenting growing up... but I figure it's worth saying anyway.
My job now is essentially solving puzzles and stopping problems before they happen. And I have time to write something non-work related while I'm at work.
Sometimes though I wonder if it really is better. My current job can be frustrating. Sometimes I come up with solutions that make me feel incompetent. It took too long and the answer is inelegant.
When I was working at that restaurant all I had to do was put forth effort. It was physical and stressful to be sure but it never made me feel bad about myself.
Not something you really want to be doing for 30+ years, though. But to be fair, I'm not sure the prospect of staring at this screen for 30+ years is especially appealing either...
1. Can you do it for 30+ years?
2. What do you do if you get sick/injured?
3. What will you do after 30 years of working for a bit more than minimum wage? I don't think you can save a lot for a rainy day...
The main problems with screens (and the reason why I try not to use them as much as I used to) are:

1) If you're interacting with a screen all the time, you're probably not interacting with someone in person, and everyone's ability to read body language and subtle cues probably goes way down when they do interact in person.

2) Current screens emit light that at the very least probably harms our biorhythms, and may harm us in other ways as well. Meanwhile, the Kindle isn't much different from looking at a book or a sign.

3) Screens provide a limited window into another world, and at least right now, that window has zero depth (and doesn't engage our other senses either, while we're at it). We don't get to take advantage of having two eyes and seeing depth while we're on a screen. This may change once VR becomes more viable and realistic.
I do try to do more things analog now, though. Writing or designing on paper while outside in good weather is much preferable to being inside on a screen. It's also why I've gravitated to more off-screen hobbies, such as board games and board game design, instead of staying on screens and programming games and apps in my off hours. Screens can let you get those things done faster, though (i.e., it's much faster and less straining on my hands to type than to write out all of my thoughts).
We will at some point have color, high-refresh-rate e-ink monitors; I'm convinced of it.
I'm looking forward to e-ink, paper-like laptops and phones that work well in the sun!
Most of us are already choosing to give into screen time, what happens when we no longer have an easy choice?
You might enjoy Vernor Vinge’s novel Rainbows End, which offers a version of a future where augmented reality is so pervasive that people stop really caring about the difference.
I must say that my own takeaway from the novel is that its idyllic AR future only works out if, as Vinge assumes, driverless cars also become a reality soon. We already have people walking into dangerous traffic because they are looking down at their phones; imagine the chaos if people start moving into dangerous paths because they are chasing something shown in the AR view.
Are you gonna have games that mess with your mind when you’re away from them?
Bret Victor wrote a piece [0] lamenting the convergence on screens as the interaction design paradigm almost 7 years ago. Bret explains why screens are limiting interaction design through examples centered around the human body.
Ironically, this NYT piece gives the impression that a human being is a floating head and fingers i.e. an AR/VR avatar that they seem to loathe. I hope the future of computing isn't just the ability to check my calendar without a screen while walking. I want to use my body in tandem with computation. I don't have a Killer App for this interaction paradigm, but I found this paper by Scott Klemmer, Björn Hartmann, and Leila Takayama useful for thinking about it [1].
[0] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi... [1] https://hci.stanford.edu/publications/2006/HowBodiesMatter-D...
A world where we begin to move off of visual interfaces will be awkward. Humans are good at absorbing conversational audio, but they mentally filter most of it out to distill its essential elements, and what's essential may not even be known ahead of time. We'll direct voice-output interfaces to repeat things often, and they will need to be smart enough to accurately determine the context of our inquiry.
Voice output is often paired with voice input, but voice propagates freely in public, leaking information to everything in range. Devices that capture speech-like input privately are not yet widespread. Meanwhile, structured command input through voice is awkward, and natural language processing doesn't sound natural yet: it's complex to implement, and the computer frequently encounters situations it doesn't understand, which is the most discouraging kind of interaction one can have with a computing platform. On top of that, audio-based interfaces are rarely designed to be discoverable, and even if they were, conveying that information over audio is less efficient than doing so visually.
New research into interface design is needed to address many of the shortcomings of current attempts to de-emphasize screens.
There was a grotesque drawing of an eyeball attached to an ear along with a couple of fingers. It’s not entirely inaccurate: most of our interactions with computers are with our fingers, eyes, and ears. But now that microcontrollers like the Arduino and SBCs like the Raspberry Pi are so cheap and accessible, we can begin to look at different ways to interact with computers, through sensors instead of touchscreens and keyboards.
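To make the sensor idea concrete, here's a minimal sketch (my own illustration, not from the article) of the kind of event-driven loop that replaces a tap or keypress with a sensor reading. The `ThresholdTrigger` class and the simulated readings are hypothetical; on a Raspberry Pi you would swap the simulated stream for actual GPIO/ADC reads.

```python
class ThresholdTrigger:
    """Fires a callback once each time the reading crosses above `threshold`."""

    def __init__(self, threshold, on_rise):
        self.threshold = threshold
        self.on_rise = on_rise
        self._above = False  # track state so we trigger on edges, not levels

    def update(self, reading):
        above = reading > self.threshold
        if above and not self._above:
            self.on_rise(reading)  # edge-triggered: fire once per crossing
        self._above = above


events = []
trigger = ThresholdTrigger(threshold=0.5, on_rise=events.append)

# Simulated stream of sensor readings (e.g. a light or proximity sensor);
# on real hardware this loop would poll the sensor instead.
for reading in [0.1, 0.2, 0.7, 0.8, 0.3, 0.9]:
    trigger.update(reading)

print(events)  # fires on the two upward crossings: [0.7, 0.9]
```

The point is only that the "input device" becomes the environment itself: the program reacts to a physical change rather than to a deliberate touch of a screen or keyboard.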
In a few decades, we may see a shift in our human-computer interfaces as lasting and profound as the leap from mainframe terminals to personal computers.
No 3D needed - it could just be text, with voice control for what to display. The eyeglasses would look like normal eyeglasses, maybe with a heavier frame to house the electronics. Or maybe the ear loop holds the extra hardware, without being as thick as a hearing aid.
Anyhow, what I think the article intended is that the next revolution will be in perfecting voice commands for apps. This is obviously for consumers, not computer geeks who work on computers all day.
I’m personally really excited about that potential. I would love to pivot my career away from teaching to building AR workflow mediation for teachers. I would even be pleased to only carry a watch and headphones to fulfill the majority of my computer-related tasks.
That said, I do fear Hyper-Reality[1] and such a persistent, obligatory mediation of our lived experience.