This little hobby deserved its very own domain. So I’ve migrated everything over to http://futurefilter.com, and will be building my futuristic empire there. See you!
Sean O’Kane, for The Verge:
In the first video, astronaut Terry Virts uses the action camera to capture a stunning view of Earth passing by, and in the second one we get a strapped-on view of what it looks like to navigate the underbelly of the International Space Station.
And it’s pretty stunning to see in good quality video.
Christopher Hooton, for The Independent:
Arthur Caplan, director of medical ethics at New York University’s Langone Medical Centre, who described Dr Canavero as “nuts”, believes that the bodies of head transplant patients “would end up being overwhelmed with different pathways and chemistry than they are used to and they’d go crazy.”
Or just crazy enough to work!
Wait, no, that didn’t make any sense. Sorry, there’s no way that this doesn’t sound doomed.
Then again, the subject, Valery Spiridonov, already suffers from the most severe type of spinal muscular atrophy. It’s amazing he’s even lived to be 30, given that “affected children never sit or stand and usually die before the age of 2 years.”
“Am I afraid? Yes, of course I am. But it is not just very scary, but also very interesting. But you have to understand that I don’t really have many choices. If I don’t try this chance my fate will be very sad. With every year my state is getting worse.”
I’ll be watching, whether or not I want to.
Medgadget reports on research from UCLA’s Biomechatronics Lab:
While tactile sensors have been used before in order to create a rudimentary sense of touch, the UCLA team is taking this technology a step further by introducing smart algorithms to process what the sensors are feeling.
Specifically, the researchers are building a “language of touch” that can be used to help humans to intuitively operate robotic devices.
A lot of ink has been spilled lately about haptic feedback, but not so much about tactile perception. (They’re both aspects of somatic senses, but haptics relates to the perception of forces during movement.) Our robots and prosthetics will need to sense all of these things, and be able to provide feedback to the fleshy creatures on the other end.
Germain Lussier, for /Film:
The director is finishing up Tomorrowland, the film he chose to do over Star Wars: The Force Awakens, and has started to look towards his next project. Thankfully for him, his next project was kind of revealed over a year ago. That’s when Disney CEO Bob Iger told investors Pixar was beginning work on The Incredibles 2 and, now that Bird is free again, it seems he’s begun work on it.
“I’m just starting to write it, so we’ll see what happens.”
Yes please! I’m a big Bird fan. Tomorrowland looks really interesting, too, but The Incredibles is easily among my favorite Pixar joints.
Irene Klotz, for ABC Science:
Using advanced computer modelling, Mastrobuono-Battisti and colleagues ran dozens of simulations of later-stage planet formation, each time starting with 85 to 90 planetary embryos and 1,000 to 2,000 planetesimals extending from about halfway between the orbits of Mercury and Venus to within 50 million miles or so of Jupiter’s orbit.
Within 100 million to 200 million years, each simulation typically produced three to four rocky planets as a result of colliding embryos and planetesimals, the scientists found. Looking particularly at the last moon-forming impact scenarios, the scientists assessed the likelihood that Theia and Earth had the same chemical composition.
Do the Earth and Moon still carry genetic material of Theia? According to more and more accurate simulations, it’s quite possible. Interesting to ponder how things might have been different otherwise. (Also, I love the idea of planetary embryos and “planetesimals.”)
Astrophysicist Sean Raymond, for Aeon:
These planets don’t orbit stars. They wander the stars. They are free citizens of the galaxy. It might seem like the stuff of science fiction but several free-floating gas giants have been found in recent years. Our own gas giants, Jupiter and Saturn, are leashed to the Sun on well-behaved orbits, but this might not be the norm in our galaxy. One study, published in Nature in 2011, suggests that the Milky Way contains two rogue gas giants for every star. That particular study remains controversial, but most astronomers agree that rogue planets are common in our galactic neighbourhood. And for every rogue gas giant there are likely to be several rogue Earth-sized rocky worlds. There are likely tens to hundreds of billions of these planets in our galaxy.
It’s fascinating to consider that a so-called “icy rogue” might harbor some exotic form of dark-adapted life. We simply don’t know at this point. But the larger the set, the more the possible becomes probable — and there may be “tens to hundreds of billions of these planets in our galaxy” alone, says Raymond. Regardless, a nearby rogue might be useful to us for other reasons:
A rogue Earthlike planet could be among our closest galactic neighbours, and in that case colonisation could be worth the effort, because we could convert a rogue planet into a jumping-off point, a waystation in our larger effort to spread out into the galaxy.
Ian Failes, for fxguide:
Ava is clearly intended to be a robot of some kind, but Whitehurst was adamant that she not feel robotic in terms of her CG materials. “The one rule I made from the outset,” he says, “was that no-one was allowed to look at robots. You were allowed, though, to look at things like Formula One suspension or high-end bicycles. We also looked at human anatomy, of course. Ultimately she’s a machine who is supposed to move and behave exactly as a human would. All of the muscles we have in there are simplified versions of human ones, for instance.”
“Initially the back of Ava’s head and neck were not metal,” adds Whitehurst, “but that decision was made to have the character weirder to look at. One of the topics or ideas in the film is that we wanted her to look robotic. When you are presented with that visually, do I read her as a character or do I read her as a machine?”
I’ve been digging the aesthetic of this film since I saw the first preview. The design of the character is elegant, and exotic without looking over-produced. The set design is also top-notch.
Martin Enserink, for Science/AAAS:
The study, published online today in the Proceedings of the Royal Society B, shows that tall Dutch men on average have more children than their shorter counterparts, and that more of their children survive. That suggests genes that help make people tall are becoming more frequent among the Dutch, says behavioral biologist and lead author Gert Stulp of the London School of Hygiene & Tropical Medicine.
“This study drives home the message that the human population is still subject to natural selection,” says Stephen Stearns, an evolutionary biologist at Yale University who wasn’t involved in the study. “It strikes at the core of our understanding of human nature, and how malleable it is.”
I haven’t been to the Netherlands yet, but one thing I noticed as soon as I arrived at Copenhagen Airport: every single male there was 6′5″.
Kelsey Campbell-Dollaghan, for Gizmodo:
Dr. Nadine Chahine is a type designer at the foundry Monotype who focuses on the science of legibility. Dr. Bryan Reimer is a scientist at MIT’s AgeLab who researches distracted driving and the impact of in-car interfaces on drivers.
Together, they’re writing the book on how our eyes read when we’re distracted by the world around us. “There literally is the need to develop a new textbook here,” Reimer told me, after he and Chahine gave a talk in March entitled At a Glance: How Does Type Impact Your Daily Life? “Companies have to come together and support science-infused design.”
Screens are getting larger and smaller all the time: higher in resolution, but packed with more information. The art and science of design within such restrictions is interesting to watch (ha). Especially because many aspects of the design language are still in early development, and we’ll surely be cringing at some of these nascent trends in just a few years’ time. (Remember skeuomorphism?)
Jason Schreier, for Kotaku:
There’s no release date or even year announced yet for the much-anticipated fourth Deus Ex game, which will be released on PS4, Xbox One, and PC. In other words, don’t expect this game until 2016. But still, it’s exciting—Human Revolution was excellent, and it sounds like the developers have been working on this next one for a very long time.
I absolutely loved the first Deus Ex, and I thought that Human Revolution made for a stellar reboot. It wasn’t perfect, but I loved being immersed in that world.
While we’re here, since this is my soapbox, I’ll go ahead and say that my biggest criticism of the first game wasn’t the boss battles, but the upgrade system. The limited resources in the first Deus Ex forced players to choose which traits to upgrade. It was a game balance mechanic that ensured that your narrative would be different across several playthroughs. The reboot abandoned that completely, and made all augments available long before the end of the story. Anyway, I have no reason to believe Square Enix will come around to my way of thinking, but I’ll be buying this game either way.
Author Joe Quirk, a “seavangelist” for The Seasteading Institute, presents a brief but thought-provoking case for the development of independent seafaring habitats. From the description:
Joe Quirk of the Seasteading Institute thinks floating cities will allow micro nations to compete for people — providing better life options and innovations. “Aquapreneurs,” says Quirk, can save humanity from disease, environmental harm and maybe even war.
I’ve always been drawn to artificial habitats, be they subterranean, extraterrestrial, or maritime. I hope to live long enough to see a space colony, but cities of the sea are probably going to happen first.
A prominent group of thinkers has raised the alarm: humanity would do well to heed the inherent dangers of artificial intelligence.
Lyle Cantor, on Medium:
A superinteligence (sic) whose super-goal is to calculate the decimal expansion of pi will never reason itself into benevolence. It would be quite happy to convert all the free matter and energy in the universe (including humans and our habitat) into specialized computers capable only of calculating the digits of pi. Why? Because its potential actions will be weighted and selected in the context of its utility function. If its utility function is to calculate pi, any thought of benevolence would be judged of negative utility.
A lot of the concern centers around a runaway AI focused on a single task, but with a child’s capacity for judgment, or constraint. But Cantor goes on to illustrate the struggle of a chimp in a man’s world:
We don’t hate chimps or the other animals whose habitats we are rearranging; we just see higher-value arrangements of the earth and water they need to survive. And we are only ever-so-slightly smarter than chimps.
In many respects our brains are nearly identical. Yes, the average human brain is about three times the size of an average chimp’s, but we still share much of the same gross structure. And our neurons fire at about 100 times per second and communicate through saltatory conduction, just like theirs do.
In a recent comment on Edge.org, Stuart Russell — co-author of Artificial Intelligence: A Modern Approach — said, “None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility.”
To me — a complete outsider — this concern should simply mean that we set breakpoints and interrupts, as we would with any program in development. Am I being naive?
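For what it’s worth, the “breakpoint” idea can at least be sketched in code. The toy below is entirely my own illustration, nothing from Cantor’s post: a single-minded pi calculator that honors an external stop condition between steps. The standard counterargument, of course, is that a sufficiently capable optimizer would treat that stop condition as just another obstacle to its utility.

```python
# Toy illustration (emphatically not a real AI): an agent single-mindedly
# pursuing its goal (digits of pi), but checking an externally controlled
# stop condition on every step.

def pi_digits(n, should_stop=lambda: False):
    """Compute the first n digits of pi with the well-known unbounded
    spigot algorithm, honoring an external interrupt between steps."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if should_stop():  # the human-held "breakpoint"
            break
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(8))  # [3, 1, 4, 1, 5, 9, 2, 6]
```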
Martin Bellander collected 120,013 paintings — most of them produced between 1800 and 2000 — then wrote statistical software to extract color data from them.
There seems to be a reliable trend of increasingly blue paintings throughout the 20th century! Actually almost all colors seem to increase at the expense of orange. But let’s focus on the increase of blue.
Of course the changes in color might be a result of a combination of factors. One of these could of course be trends in the use of color. If we assume a smooth linear deterioration of certain colors in oil paintings, it would be possible to subtract that change and study the short term fluctuation in color use. For example, the marked increase of blue at the time of the First World War might actually reflect a true trend in color use.
Regardless of the reason(s) why blue has become more prominent, it’s fascinating to see this trend emerge from the analysis itself.
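As a rough illustration of the kind of pipeline Bellander describes (this is my own sketch, not his code, and the sample data is invented), you could reduce each painting to a single number — the blue channel’s share of total intensity — then fit a trend over time:

```python
# A minimal sketch of a color-trend analysis. The paintings here are
# hypothetical stand-ins: lists of (r, g, b) pixel tuples.

def blue_share(pixels):
    """pixels: iterable of (r, g, b) tuples. Return blue's share of intensity."""
    r = g = b = 0
    for pr, pg, pb in pixels:
        r, g, b = r + pr, g + pg, b + pb
    total = r + g + b
    return b / total if total else 0.0

def trend_slope(points):
    """Ordinary least-squares slope for (year, value) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Invented data: paintings getting bluer over time.
paintings = [
    (1800, [(200, 150, 50)] * 10),   # warm, orange-heavy
    (1900, [(120, 120, 120)] * 10),  # neutral grey
    (2000, [(50, 80, 220)] * 10),    # cool, blue-heavy
]
points = [(year, blue_share(px)) for year, px in paintings]
print(trend_slope(points))  # a positive slope: the paintings are getting bluer
```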
Oliver Sacks, for The New York Review of Books:
Soon after waking from the embolization—it was performed under general anesthesia—I was to be assailed by feelings of excruciating tiredness and paroxysms of sleep so abrupt they could poleaxe me in the middle of a sentence or a mouthful, or when visiting friends were talking or laughing loudly a yard away from me. Sometimes, too, delirium would seize me within seconds, even in the middle of handwriting. I felt extremely weak and inert—I would sometimes sit motionless until hoisted to my feet and walked by two helpers. While pain seemed tolerable at rest, an involuntary movement such as a sneeze or hiccup would produce an explosion, a sort of negative orgasm of pain, despite my being maintained, like all post-embolization patients, on a continuous intravenous infusion of narcotics. This massive infusion of narcotics halted all bowel activity for nearly a week, so that everything I ate—I had no appetite, but had to “take nourishment,” as the nursing staff put it—was retained inside me.
As usual, Sacks paints a picture with his words. He describes the expected lows, as well as the surprising highs. He’s not out of the woods yet, but there is room for hope. You should definitely read the article.
This one’s fascinating to me. I’ve seen several otherwise unrelated articles about LA’s parking sign redesign, and each had a negative spin. What drops my jaw is that the new signs, to me, are a vast improvement. Finally, at a glance, I can see exactly where I am right now, and where I’m not supposed to be. Looking at the comments, you’ll see the split. Some people like the new signs (with the usual quibbles about color, etc.), and others are responding as if LA had posted signs of Hitler waving an American flag.
Here’s what the controversy is about:
It’s such a clear improvement over what I’ve had to endure before that I have to imagine that there are some people who aren’t seeing the same things I’m seeing. Am I so far to one side of the visual spectrum that I need the week to be laid out into logical, contextually-accurate zones in order to interpret a sign correctly? Perhaps. Regardless, score one for visual savants!
Nicola Twilley, for The New Yorker:
Until recently, astronomers had focussed on analyzing a planet’s reflected light for evidence that its atmosphere contained oxygen or other gases that are considered to be positive indicators for the presence of life. In 2002, however, they proved for the first time that a pigment—chlorophyll, the molecule that makes plants and certain algae appear green to our eyes—could serve as a biosignature. By observing Earthshine, the sunlight that is reflected from Earth onto the surface of the moon, they were able to detect a “vegetation red edge,” a distinctive spike in the near-infrared region that is caused by chlorophyll. (To an alien equipped with infrared goggles, the faint glowing pixel that is Earth would actually have a hot-pink tinge.)
It seems quite a feat to look at a single point of light to assess its capacity for life — like looking at a novel’s word cloud and guessing its genre based on the largest noun. Still, it’s a start.
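The red-edge measurement itself is simple to sketch. Chlorophyll absorbs strongly in the red but reflects strongly in the near-infrared, so a vegetated surface shows a sharp jump between the two bands. The index below is the standard NDVI-style contrast from remote sensing; the reflectance values are invented for illustration:

```python
# A toy sketch of red-edge detection. Chlorophyll absorbs in the red
# (~680 nm) but reflects in the near-infrared (~750 nm), so vegetation
# produces a sharp jump between the two bands.

def red_edge_index(red, nir):
    """(NIR - red) / (NIR + red): near 0 for bare rock, high for vegetation."""
    return (nir - red) / (nir + red)

vegetation = red_edge_index(red=0.05, nir=0.50)  # leafy: dark in red, bright in NIR
bare_rock = red_edge_index(red=0.30, nir=0.35)   # flat spectrum, no edge
print(vegetation, bare_rock)
```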
David Pierce, for Wired:
Lynch and team had to reengineer the Watch’s software twice before it was sufficiently fast. An early version of the software served you information in a timeline, flowing chronologically from top to bottom. That idea never made it off campus; the ideas that will ship on April 24 are focused on streamlining the time it takes a user to figure out whether something is worth paying attention to.
Take the feature called Short Look: You feel a pulse on your wrist, which means you’ve just received a text message. You flick your wrist up and see the words “Message from Joe.” If you put your wrist down immediately, the message stays unread and the notification goes away. If you keep your wrist up, the message is displayed on the Watch’s screen. Your level of interest in the information, as demonstrated by your reaction to it, is the only cue the Watch needs to prioritize. It’s interactions like this that the Watch team created to get your face out of your tech.
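The Short Look behavior reads like a tiny state machine. Here’s a hypothetical sketch — my own modeling of the interaction Pierce describes, not Apple’s code:

```python
# A hypothetical model of the "Short Look" interaction: the wearer's wrist
# movement is the only input, and it drives what (if anything) is shown.

class ShortLook:
    def __init__(self, sender):
        self.sender = sender
        self.state = "pulsed"  # haptic tap delivered, nothing shown yet

    def wrist_raised(self):
        self.state = "short_look"
        return f"Message from {self.sender}"  # just the sender, no content

    def wrist_lowered(self):
        self.state = "dismissed"  # notification goes away, message stays unread
        return None

    def wrist_held_up(self):
        self.state = "long_look"  # continued interest, so show the message
        return f"{self.sender}: <full message body>"

n = ShortLook("Joe")
print(n.wrist_raised())   # "Message from Joe"
print(n.wrist_held_up())  # full message shown
```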
Also, I love this bit:
It has become normal for Apple employees to randomly stand during meetings because their Watch told them to.
It’s a pretty huge gamble, but Apple seems to believe it has a hit on its wrists. I’m undecided, but I do love to see the state of the art being advanced. Even if the watch itself never finds a place on my wrist, I’m fascinated by such things as their so-called taptic engine, and the thinking that went into a pared down, but information-rich interface.
Victoria Turk, for Motherboard:
Blumlein’s work included inventions needed for recording, processing and playing sound in stereo and he had around 70 patents to his name.
Dedicating the plaque, IEEE President Howard Michel explained that his work included “a ‘shuffling’ circuit to preserve directional sound, an orthogonal ‘Blumlein Pair’ of velocity microphones, recording of two orthogonal channels in a single groove, stereo disc-cutting head, and hybrid transformers to mix directional signals.”
Some of these inventions were on show from the EMI archive, including a “binaural” microphone arrangement known as the Blumlein Pair. The pair of microphones are positioned at a right-angle to pick up sound separately and so give the stereo effect.
Yet, as happens all too often, it wasn’t until well after Blumlein’s death that his work was put to widespread use, and his contribution acknowledged outside audiophile circles.
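The geometry behind the Blumlein Pair is simple enough to sketch. A figure-8 (velocity) microphone picks up sound in proportion to the cosine of the angle between the source and the mic’s axis, so crossing two of them at 90° maps a source’s position onto a left/right level difference — the stereo cue. The numbers below are just an illustration:

```python
import math

# Blumlein Pair geometry: two figure-8 microphones crossed at 90 degrees,
# with their axes at +45 and -45 degrees from centre. Each mic's pickup is
# proportional to cos(angle between the source and the mic's axis).

def blumlein_gains(source_angle_deg):
    """Left/right pickup for a source at the given angle
    (0 = centre, positive angles toward the left mic's axis)."""
    theta = math.radians(source_angle_deg)
    left = math.cos(theta - math.radians(45))   # left mic aimed at +45 degrees
    right = math.cos(theta + math.radians(45))  # right mic aimed at -45 degrees
    return left, right

print(blumlein_gains(0))   # equal levels: source dead centre
print(blumlein_gains(45))  # full level on the left, (almost) nothing on the right
```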
Elif Batuman, for the New Yorker:
This was my first experience of transcranial direct-current stimulation, or tDCS—a portable, cheap, low-tech procedure that involves sending a low electric current (up to two milliamps) to the brain. Research into tDCS is in its early stages. A number of studies suggest that it may improve learning, vigilance, intelligence, and working memory, as well as relieve chronic pain and the symptoms of depression, fibromyalgia, Parkinson’s, and schizophrenia.
The precise physical mechanism of tDCS remains mysterious. The electric current used is too low to cause resting neurons to fire. Instead, it seems to make neurons more or less likely to fire, by changing the electrical potential of nerve-cell membranes. In other words, although tDCS can’t create new neural activity, it can enhance or reduce existing activity.
Few claims about tDCS are free from controversy. In the past few months, Jared Horvath, a fourth-year doctoral student at the University of Melbourne, published two meta-analyses of hundreds of studies, in which he claims to have found no evidence of either physiological changes to the brain or of cognitive effects from tDCS.
And, finally, I found this bit both interesting and funny:
The implication of placebo is extremely powerful: What if the body knows, in some sense, how to heal itself, and it’s just a matter of triggering that knowledge? Schambra suspects tDCS may not merely trigger the placebo effect, as all treatments do, but actually amplify it. In other words, in a controlled tDCS study, both active and sham groups get a placebo effect, but the active group may get a bigger effect.
So, the jury is out. On the one hand, what seems to help, helps. On the other hand, we should have a better understanding of what’s actually providing the help, and how it works. In the meantime, it’s at least an interesting series of anecdotes.
John Timmer, for Ars Technica:
Ohio State University’s Carl Vuosalo helped show us around the CMS, but first he had to shepherd us past higher security than I’ve ever experienced. To do so, he passed through a retina-scanning security system that simultaneously checked his weight (presumably to keep someone with a disembodied eyeball from making their way past the system). I passed it solely because of Vuosalo’s ingenuity. He opened a door meant for the delivery of equipment, slipping me through as if I was a UPS shipment.
Even though the LHC was shut down, the team made safety the highest priority. They lectured on protocol; they issued me a hard hat. The greatest risk of death at the LHC, as it turns out, is suffocation. Liquid helium (120 tons of it, along with another 10,000 tons of liquid nitrogen) cool the accelerator hardware, while many parts of the giant detectors rely on liquid argon to track particles through them. Either of those will happily convert to a gas if let loose from their containers. In addition, the fire suppression system could fill the entire chamber the CMS resides in with foam in under a minute.
It’s been said before, but the facilities built to support the research are among the wonders of the world. It’s good to get a peek behind the scenes, and I can’t help but think that the working conditions are similar to those in the Death Star.
Annie Minoff and Jared Goyette, for Public Radio International:
When Kathy Kleiman started researching the history of computer programming as an undergraduate, she came across old black-and-white photos of the people who worked on the ENIAC, the world’s first all-electronic programmable computer. But they seemed to be missing a key detail.
Both men and women were pictured posing in front of the mammoth machine — ENIAC was 8 feet tall and 80 feet long — but the men were listed in the captions while the women were not. When Kleiman asked why, the response she got was both incredibly wrong and incredibly telling.
And I love this part:
“When I went and started inquiring who were the women, I was told they were models,” Kleiman says. “But that wasn’t the case at all.”
Well, role models, maybe. I’m glad these pioneers are finally getting their due respect. The article mentions Kathy Kleiman’s documentary on the women, “The Computers,” which was the official selection of the 2014 Seattle Film Festival.
A paper published in PLOS ONE breaks it down:
Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control.
Stands to reason, since practice makes 1up.
Dian Schaffhauser, for Campus Technology:
The researchers said they aren’t exactly sure what’s happening in the brain of gamers that differs from non-gamers, but according to Yuka Sasaki, associate professor in the Department of Cognitive, Linguistic and Psychological Sciences (CLPS) at Brown, the study suggests that gamers may have a more efficient process for hardwiring their visual task learning than non-gamers. “It may be possible that the vast amount of visual training frequent gamers receive over the years could help contribute to honing consolidation mechanisms in the brain, especially for visually developed skills,” the report stated.
“When we study perceptual learning we usually exclude people who have tons of video game playing time because they seem to have different visual processing. They are quicker and more accurate,” said Sasaki in a statement. “But they may be in an expert category of visual processing. We sometimes see that an expert athlete can learn movements very quickly and accurately and a musician can play the piano at the very first sight of the notes very elegantly, so maybe the learning process is also different. Maybe they can learn more efficiently and quickly as a result of training.”
I may be imagining it, but it feels like gaming has helped my driving. I’m able to ignore things that are just noise, and really focus on what I need to flow around disruption. Other drivers seem to “stick” to complex traffic formations, and slow down everyone behind them. Maybe it’s not related, but that’s what always occurs to me in those situations.
Bryan Bishop, for The Verge:
Max became a singular ’80s pop culture phenomenon that represented everything wonderful and horrible about the decade. Max hosted music video shows; Max interviewed celebrities; Max hawked New Coke; Max Headroom became US network television’s very first cyberpunk series. Max was inescapable — and then almost just as quickly as he had appeared, he was gone.
Thirty years after the premiere, I spoke with the writers, directors, producers, actors, make-up artists, and network executives that helped bring Max Headroom to life. And it all began, like so many things in the ’80s, with music videos.
I loved Max Headroom as a kid — love the idea of Max Headroom. I knew the graphics were simulated, and that we were a long way from AI. But for me it was an era where anything seemed possible. That being said, there was no way to scare up information about the things you were interested in. It’s great to be able to get a peek behind the scenes now, decades later. Back then we just had to view these mysterious works from a great distance, and wonder how it had all come to pass. There was no instant gratification, even 20 minutes into the future.
Jonathan Webb, for BBC News:
Researchers want to learn from the ants’ cooperative methods and develop search algorithms for groups of robots.
The ants were sent aloft in a supply rocket in January 2014, and results from the experiments are published in the journal Frontiers in Ecology and Evolution.
The team is now beginning a citizen science project where schoolchildren can help collect data from other ant species – in their classrooms, rather than up in space.
Speaking to the BBC’s Science in Action, senior author Deborah Gordon said that ants have demonstrated their remarkable collective abilities in myriad environments on Earth, but the results from the microgravity conditions of the ISS were something new.
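The collective-search idea the researchers want to borrow can be sketched as a toy simulation — my own illustration, not the study’s methods. Each ant wanders at random, and with no central coordination at all, the rate at which it bumps into nest-mates tracks the local density:

```python
import random

# Toy model of ant-style density estimation: random walkers on a grid,
# where an "encounter" is two ants landing on the same cell. A denser
# colony produces a higher encounter rate, and each individual can sense
# that rate locally without any central coordinator.

def simulate_encounter_rate(n_ants, grid_size, steps, seed=0):
    rng = random.Random(seed)
    ants = [(rng.randrange(grid_size), rng.randrange(grid_size))
            for _ in range(n_ants)]
    encounters = 0
    for _ in range(steps):
        # each ant takes one random step (wrapping at the grid edges)
        ants = [((x + rng.choice([-1, 0, 1])) % grid_size,
                 (y + rng.choice([-1, 0, 1])) % grid_size) for x, y in ants]
        cells = {}
        for pos in ants:
            cells[pos] = cells.get(pos, 0) + 1
        encounters += sum(c * (c - 1) // 2 for c in cells.values())
    return encounters / steps

sparse = simulate_encounter_rate(n_ants=10, grid_size=20, steps=500)
dense = simulate_encounter_rate(n_ants=40, grid_size=20, steps=500)
print(sparse, dense)  # the denser colony meets far more often
```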
Why am I not surprised to learn that ants, with their sturdy, robot-like bodies, and cloud-like swarm intelligence, might be better suited to zero-g environments than squishy humans?