Tag Archives: visual

Ranked: The Foods With the Largest Environmental Impact – Visual Capitalist

  1. Ranked: The Foods With the Largest Environmental Impact Visual Capitalist
  2. Can swapping beef for chicken help your diet and the planet? How many steps do we really need per day? How this health news can impact your life. Yahoo Life
  3. Small dietary changes could offset carbon emissions Earth.com
  4. Study shows simple diet swaps can cut carbon emissions and improve your health Tulane University
  5. Simple dietary substitutions can reduce carbon footprints and improve dietary quality across diverse segments of the US population Nature.com

Read original article here

The Image You See First In This Visual Test Reveals Your Specific Love Language – YourTango

  1. The Image You See First In This Visual Test Reveals Your Specific Love Language YourTango
  2. What you see first in this optical illusion reveals whether you’re ultra sensitive or morally strong… The US Sun
  3. Optical illusion reveals whether YOU are an empath or the natural-born star of the show – and it all depends o Daily Mail
  4. Optical Illusion Personality Test: Discover Whether You are an Extrovert, Introvert or Ambivert Times Now
  5. What you see in optical illusion shows you if you have a big heart – or you’re a loner The Mirror

Read original article here

Schizophrenia Identified in 60 Seconds via Visual Fixation – Neuroscience News

  1. Schizophrenia Identified in 60 Seconds via Visual Fixation Neuroscience News
  2. Regulation of synaptic connectivity in schizophrenia spectrum by mutual neuron-microglia interaction | Communications Biology Nature.com
  3. Study identifies DNA Methylation markers linked to schizophrenia risk in newborns The News International
  4. Researchers identify DNA methylation markers linked to increased risk of schizophrenia in newborns News-Medical.Net
  5. Identification of DNA Methylation Markers in Newborns for Increased Schizophrenia Risk Neuroscience News

Read original article here

Life Biosciences Presents Groundbreaking Data at ARVO Demonstrating Restoration of Visual Function in Nonhuman Primates – GlobeNewswire

  1. Life Biosciences Presents Groundbreaking Data at ARVO Demonstrating Restoration of Visual Function in Nonhuman Primates GlobeNewswire
  2. ARVO LIVE: Analysis of vision loss from GATHER clinical program Ophthalmology Times
  3. Iveric Bio Announces New Functional Vision Loss Reduction Data from Avacincaptad Pegol GATHER Trials Presented at ARVO Annual Meeting Business Wire
  4. ARVO 2023: Life Biosciences presents groundbreaking data at ARVO demonstrating Restoration of Visual Function in Nonhuman Primates Ophthalmology Times
  5. ARVO LIVE: Lexitas modified National Eye Institute scale Ophthalmology Times

Read original article here

“A Visual Masterpiece” – The Hollywood Reporter

The first press reactions to Avatar: The Way of Water are in, and the response is largely gushing.

The long-awaited sequel to director James Cameron’s fantasy epic is being called better than the 2009 original in both its story and its cutting-edge visual effects.

While some chided Cameron for the film’s length (it is more than three hours) or took issue with its myriad of characters and storylines, most seemed impressed — even overwhelmed — by the film’s underwater visuals.

Here is a sample of reactions now that the film’s social media embargo has lifted for journalists who saw screenings of the film, as well as from those attending Tuesday’s London premiere (full critic reviews will be posted Dec. 13):

Mike Ryan of Uproxx: “Yeah never bet against James Cameron. Trying to spare hyperbole, but I’ve never seen anything like this from a technical, visual standpoint. It’s overwhelming. Maybe too overwhelming. Sometimes I’d miss plot points because I’m staring at a Pandora fish. Also, I rewatched the first AVATAR over the weekend and basically settled on ‘that was fine.’ The sequel has much better and deeper character development.”

Perri Nemiroff of Collider: “#AvatarTheWayOfWater is pretty incredible. I had faith James Cameron would raise the bar w/ the effects but these visuals are mind-blowing. One stunning frame after the next. But the thing I dug most is how the technical feats always feel in service of character & world-building.”

Ian Sandwell of Digital Spy: “Unsurprisingly, #AvatarTheWayOfWater is a visual masterpiece with rich use of 3D and breathtaking vistas. It does suffer from a thin story and too many characters to juggle, yet James Cameron pulls it together for an extraordinary final act full of emotion and thrilling action.”

Yolanda Machado of EW: “James Cameron is a technology master… and his direction is at its most precise here. The film as a whole, while a technological marvel with a breathtaking world, is just …. Dances with Wolves and Free Willy for Gen Z! Pee beforehand.”

Jeff Nelson of CheatSheet: “#Avatar/#AvatarTheWayOfTheWater is a visual marvel with mesmerizing beauty in every frame. James Cameron’s sequel thrives when it explores new terrain, crafting bigger and better emotional stakes. The definition of epic.”

David Sims of The Atlantic: “AVATAR: THE WAY OF WATER absolutely owns bones. I was slapping my seat, hooting, screaming for the Na’vi to take out every last one of those dang sky people …it’s an Avatar movie: slow start, big build, incredibly involving second act with a ton of world building and cool creatures that blisses you way out, then an hour of screamingly good crystal clear emotionally trenchant action to send you home full and happy.”

David Ehrlich of IndieWire: “Avatar The Way of Water: lol imagine being dumb enough to bet against James Cameron. or teen alien Sigourney Weaver. or giant whales subtitled in papyrus. light years better than the first & easily one of the best theatrical experiences in ages. streaming found dead in a ditch. I was, uh, not exactly champing at the bit for an Avatar 2 (even if ‘James Cameron + wet’ tends to work out pretty well). now I can’t *wait* to see Avatar 3. that’s basically all I wanted out of this and it delivered in a big way.”

Kara Warner of People: “As an Avatar stan, I had high hopes for #AvatarTheWayofWater and for me it totally delivers. Sure it’s a little long, but worth it for the gorgeous visuals, wonderful new characters. A total thrill.”

Scott Mantz, Producer: “AVATAR: THE WAY OF WATER is breathtakingly beautiful with the most incredible VFX I have ever seen (I saw it in 3D); the story itself is weaker than the first and feels drawn out at 3 hours & 10 minutes, but it’s always great to look at & the last hour is amazing.”

Amon Warmann of Empire: “So, #AvatarTheWayOfWater: Liked it, didn’t love it. The good news is that 3D is good again (yay!), and the action is pretty incredible (especially in the final act). But many of the storylines feel like they have to stop and start, and the high frame rate was hit & miss for me.”

Erik Davis of Fandango: “Happy to say #AvatarTheWayOfWater is phenomenal! Bigger, better & more emotional than #Avatar, the film is visually breathtaking, visceral & incredibly engrossing. The story, the spectacle, the spirituality, the beauty – this is moviemaking & storytelling at its absolute finest.”

Drew Taylor of The Wrap: “Have now seen #Avatar twice and am overwhelmed by both its technical mastery and unexpectedly intimate emotional scope. Yes the world is expanded and sequels teased but the characters are most important. Cameron is in top form, especially in final act. Good to have him back.”

Cameron has already filmed Avatar 3 and part of the fourth installment. “I want to tell an epic story over a number of films. Let’s paint on a bigger canvas. Let’s plan it that way. Let’s do The Lord of the Rings,” he told The Hollywood Reporter in a recent cover story. “Of course, they had the books. I had to write the book first, which isn’t a book, it’s a script.”

Avatar: The Way of Water opens Dec. 16.



Read original article here

VTuber Kurune Kokuri visual novel Welcome Kokuri-san announced for PS4, Switch, PC, iOS, and Android

CyberStep has announced Welcome Kokuri-san, a visual novel starring Virtual YouTuber Kurune Kokuri. It will launch for PlayStation 4, Switch, and PC via Steam, BOOTH, and DLsite, as well as iOS and Android, in winter 2023. It will support English, Japanese, and Traditional Chinese language options.

Here is an overview of the game, via CyberStep:

About

Welcome Kokuri-san is a visual novel starring the VTuber Kurune Kokuri. The story is fully voiced, and will include scenes with realistic ASMR voices, so you can enjoy the scenario “as if you were there.”

Story

“Kokkuri-san, Kokkuri-san, please come see us.”

When you fearlessly chant the “Kokkuri-san” spell at school in the middle of the night, what appears is a surprisingly cute little fox named Kurune Kokuri.

And so begins the investigation of the Seven Wonders of School by you and Kurune Kokuri…

What is ASMR?

ASMR stands for “autonomous sensory meridian response” and refers to pleasant sounds that stimulate the listener’s sense of hearing.

Since ASMR is commonly recorded with special equipment, it can make you feel as if the voices are next to you, allowing you to become more immersed in the story.

Who is Kurune Kokuri?

Kurune Kokuri is a VTuber who was inadvertently summoned through the Japanese Ouija Board “Kokkuri-san.” Surrounded by her favorite games and delicious food, she is happily enjoying her modern life.

She is active on YouTube, live streaming games and songs with a focus on slow and soothing ASMR!

Read original article here

Cyberpunk 2077: how the FSR2 upgrade improves visual quality

Cyberpunk continues to get better with every patch. From the bug fixes, the added performance and ray tracing modes on PS5 and Series X, the input lag improvements in patch 1.6, and even Series S getting a 60fps performance mode – the game continues to evolve. Developer CD Projekt RED goes further with the new 1.61 patch, which adds AMD’s FidelityFX Super Resolution, version 2.1, into the game. This is good news for PC owners of course, but FSR2 is integrated into the console builds too – so what kind of improvement does it bring?

In case this is new to you, FSR2 is a smart upscaling technique designed by AMD, the idea being to render a good-looking 4K output image using just an internal 1080p image, drastically improving performance in the process. With the move to FSR2, there’s an opportunity to adjust the native rendering resolutions on every console. However, in my tests, native resolution targets on consoles generally seem unchanged and dynamic resolution scaling is still in effect. For example, on Xbox Series S’ quality mode we have 1440p as the target, though the lowest possible resolution does seem to shift, from the 1296p seen in version 1.6 to 1080p on this new patch.
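For reference, AMD’s published FSR 2 presets each define a per-axis scale factor, which is where the “4K from 1080p” figure comes from: the Performance preset renders at half the output resolution on each axis. A quick sketch (the function name is ours; the console builds discussed here use dynamic resolution rather than these fixed presets):

```python
# Per-axis scale factors from AMD's published FSR 2 quality presets.
FSR2_PRESETS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def internal_resolution(out_w, out_h, preset):
    """Return the internal render resolution implied by a preset."""
    scale = FSR2_PRESETS[preset]
    return round(out_w / scale), round(out_h / scale)

for name in FSR2_PRESETS:
    w, h = internal_resolution(3840, 2160, name)
    print(f"{name}: renders at {w}x{h} for 4K output")
```

At 4K output, the Performance preset gives exactly 1920×1080, matching the headline claim above.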

It’s worth stressing that the typical rendering resolution between these points on Series S is similar. Likewise, Series S’ performance mode again targets 1080p as the maximum possible figure, while in GPU-taxing areas it drops closer to 1344×756 – lower than the 800p we recorded before the patch. As for PS5 and Series X? They each continue to run at a constant native 1440p in their ray tracing modes, as before. FSR2 then reconstructs that to appear like a 4K image in static moments, quite convincingly I will say. And in performance mode, the resolution is more flexible, adjusting between 1728p and 1260p.

The Digital Foundry analysis of Cyberpunk 2077’s patch 1.61, focusing on the FSR2 upscaling improvements.

The key to patch 1.61’s boost to picture quality isn’t in those raw pixel counts but in the use of FSR 2.1’s image treatment, and there are multiple pros and cons to this. First of all, it’s worth pointing out that there isn’t a toggle or an option on console to enable FSR, as there is on PC. Rather, it’s fixed in place, replacing the older default temporal anti-aliasing method CDPR used. Fortunately, in most cases, this really doesn’t have a downside. FSR2 does genuinely improve image quality, whether it be in static shots, in motion, in dealing with aliasing, or even in instances of disocclusion – where objects in the foreground move, revealing previously hidden detail.

Taking the 30fps ray tracing mode as an example, the entire image is much sharper and clearer, better resolving sub-pixel detail – and detail in general. A long view of the outskirts of Night City brings this out especially well; more detail is noticeable at range, including the wording on shop signs and the definition of swaying plantlife. It’s not all about enhancing detail, though. FSR2’s other strength is in logically recognising the elements of the screen that need to be dialled down. Any elements with visual noise, aliasing, or flicker need to be addressed – and FSR2 does this more effectively overall, even if the problem isn’t entirely eliminated. Indeed, in the case of barbed wire fences (see the video above for detail on this one), sometimes the flickering artefact looks worse than with the older TAAU solution, but ultimately it’s a net win for image quality.

As for gameplay in motion? Well, here there’s a substantial upgrade to the treatment of fine elements like hair. There’s simply less break-up and more temporal stability, with the processing FSR2 brings to these finer, sub-pixel details helping to reduce the distraction. FSR2 also thankfully improves – or at least greatly minimises – ghosting artefacts from CDPR’s previous solution. In other words, the obvious banding trails left behind moving objects are reduced, though not totally eliminated.

In our last Cyberpunk 2077 piece, we took a look at the 60fps performance mode added for Xbox Series S – which is now improved a touch with the latest patch.

Fast motion is the ultimate test for upscalers, and yet again, FSR2 manages to boost overall clarity as we walk, or even drive quickly forward. Inevitably there is some break-up to lateral motion, though really, it’s to be expected given how FSR2 works. During a panning shot, FSR is being fed new visual data from the screen’s edges – and during a rapid pan most of the data within the frame will be entirely different to the last. Even with such limits, Cyberpunk 2077 is still better with FSR2 than without, but moving into performance mode, the internal resolution is reduced and so the impact the algorithm makes is more limited. FSR2 on PS5, Series X and S’ performance modes still offers a boost to overall clarity. It’s also worth noting that doubling frame-rate to 60fps here gives a temporal-based solution more data to work with, meaning FSR2 has more success in motion in this mode.
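The “temporal” idea the analysis keeps returning to can be sketched in a few lines: an upscaler like FSR2 keeps a persistent history buffer and blends each new, noisy frame into it, so more frames per second means more samples to converge from. This toy version (illustrative only, with made-up names) assumes a static scene and omits the motion-vector reprojection and sample rejection real FSR2 performs, which is exactly why fast motion is the hard case described above.

```python
import numpy as np

def accumulate(history, new_frame, alpha=0.1):
    """Exponential temporal blend: higher alpha trusts the new frame more."""
    return (1.0 - alpha) * history + alpha * new_frame

rng = np.random.default_rng(0)
truth = rng.random((4, 4))            # the "real" image we want to resolve
history = np.zeros((4, 4))            # accumulation buffer, starts empty
for _ in range(200):                  # 200 frames of noisy samples
    sample = truth + rng.normal(0.0, 0.2, truth.shape)
    history = accumulate(history, sample)

print(np.abs(history - truth).max())  # converges close to the truth
```

The same mechanism explains the disocclusion problem: if `truth` suddenly changed under a region of the buffer, the stale history would bleed through until enough new frames washed it out.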

Performance bears some mention. We’re used to seeing a trade-off between visuals and frame-rate, and so the question is: with all of FSR2’s benefits, is there any difference to how PS5 or Series consoles play? The truth is that the consoles always lost most performance in crowded areas – the market for example – with this likely being a CPU bottleneck that will go unaffected by FSR2. And taking PS5 as an example in its 60fps performance mode, that still holds true on patch 1.61. In side-by-sides with our last tested patch – update 1.5 – there’s a difference, though not a consistent one. Patch 1.61 sometimes pushes ahead and sometimes falls behind. Later shootouts do show the new patch dropping more often into the 50fps region. But then, this might be incidental, given that gameplay runs are impossible to sync exactly.

In general, PS5 and Series X tend to exhibit a similar performance profile on patch 1.61. Drops to 50fps and under are possible, just as they were before. Adding FSR2 isn’t helping to clear the gap to a rock-solid 60fps, but evidence suggests it’s not hindering it either. Meanwhile, there is some evidence (around mirrors specifically) that Xbox Series S runs a touch faster with FSR2, though this may well be down to tweaks to dynamic resolution and/or the introduction of AMD’s upscaler. It’s not a radical difference, and certainly later tests within Night City don’t make the advantage so obvious. For PS5, Series X and S, the highlight is without question the improved image quality.

Overall, FSR2 is a net win for all the new consoles, intelligently picking out the details we want enhanced while also addressing issues with the image, like ghosting on movement and the flickering on hair. There’s more stability, fewer distractions, and a greater push for detail at range. The only downside is that the algorithm is still a work in progress, with AMD continuing to improve the technology even now. Image break-up is still a problem, and on Series S specifically there are moments where the image momentarily breaks up during basic forward movement. Cyberpunk has come a long way since its launch, though. Every new patch – even incremental ones like 1.61 – seems to make an impact, showing that CD Projekt RED is far from done with the game.



Read original article here

How Visual Information Travels From the Retina to the Midbrain

Summary: Neurons in the midbrain receive strong, specific synaptic input from retinal ganglion cells, but only from a small number of the sensory neurons.

Source: Charite

For the first time, neuroscientists from Charité – Universitätsmedizin Berlin and the Max Planck Institute for Biological Intelligence (currently in the process of being established) have revealed the precise connections between sensory neurons inside the retina and the superior colliculus, a structure in the midbrain.

Neuropixels probes are a relatively recent development, representing the next generation of electrodes. Densely packed with recording points, Neuropixels probes are used to record the activity of nerve cells, and have facilitated these recent insights into neuronal circuits.

Writing in Nature Communications, the researchers describe a fundamental principle which is common to the visual systems of mammals and birds.

Two brain structures are crucial to the processing of visual stimuli: the visual cortex in the cerebral cortex and the superior colliculus, a structure in the midbrain. Vision and the processing of visual information involve highly complex processes.

In simplified terms, the visual cortex is responsible for general visual perception, whereas the structures of the evolutionarily older midbrain are responsible for visually guided reflexive behaviors.

The mechanisms and principles involved in visual processing within the visual cortex are well known. Work conducted by a team of researchers led by Dr. Jens Kremkow has contributed to our knowledge in this field and, in 2017, culminated in the establishment of an Emmy Noether Junior Research Group at Charité’s Neuroscience Research Center (NWFZ).

The primary aim of the Research Group, which is funded by the German Research Foundation (DFG), is to further improve our understanding of nerve cells involved in the visual system. Many unanswered questions remain, including details of the way in which visual information is processed in the midbrain’s superior colliculi.

Retinal ganglion cells, sensory cells found inside the eye’s retina, respond to external visual stimuli and send the information received to the brain. Direct signaling pathways ensure that visual information received by the retinal nerve cells also reaches the midbrain.

“What had remained largely unknown until now is the way in which nerve cells in the retina and nerve cells in the midbrain are linked on a functional level. The dearth of knowledge regarding the way in which neurons in the superior colliculi process synaptic inputs was similarly pronounced,” says study lead Dr. Kremkow.

“This information is crucial to understanding the mechanisms involved in midbrain processing.”

Until now, it had been impossible to measure the activity of synaptically connected retinal and midbrain neurons in living organisms. For their most recent research, the research team developed a method which was based on measurements obtained with innovative, high-density electrodes known as Neuropixels probes.

More precisely, Neuropixels probes are tiny, linear electrode arrays featuring approximately one thousand recording sites along a narrow shank, of which 384 can be used to simultaneously record the electrical activity of neurons in the brain. These devices have become game-changers within the field of neuroscience.
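As a rough illustration (not the authors’ pipeline), data from such a probe can be pictured as a samples-by-channels array, with candidate spikes found by a simple per-channel voltage threshold. Real spike sorters do far more, including the waveform analysis needed to distinguish somatic spikes from the axonal signals discussed below; all names and numbers here are illustrative.

```python
import numpy as np

def detect_spikes(traces, thresh_sd=6.0):
    """Return (sample, channel) pairs where the trace dips below -thresh_sd * sd."""
    sd = traces.std(axis=0)                    # per-channel noise estimate
    return np.argwhere(traces < -thresh_sd * sd)  # spikes are negative deflections

rng = np.random.default_rng(1)
# One second of noise at 30 kHz across 384 channels, as on a Neuropixels probe.
traces = rng.normal(0.0, 1.0, (30_000, 384))
traces[10_000, 42] = -50.0                     # inject one large spike
events = detect_spikes(traces)
print(len(events), "threshold crossing(s) found")
```

Distinguishing whether such a crossing came from a cell body or a passing axonal arbor is precisely the kind of question the waveform shapes in this study address.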

Researchers working at Charité and the Max Planck Institute for Biological Intelligence have now used this new technology to determine the relevant midbrain structures in mice (superior colliculi) and birds (optic tectum).

Both brain structures have a common evolutionary origin and play an important role in the visual processing of retinal input signals in both groups of animals.

Their work led the researchers to a surprising discovery: “Usually, this type of electrophysiological recording measures electrical signals from action potentials which originate in the soma, the neuron’s cell body,” explains Dr. Kremkow.

“In our recordings, however, we noticed signals whose appearance differed from that of normal action potentials. We went on to investigate the cause of this phenomenon, and found that input signals in the midbrain were caused by action potentials propagated within the ‘axonal arbors’ (branches) of the retinal ganglion cells. Our findings suggest that the new electrode array technology can be used to record the electrical signals emanating from axons, the nerve cell projections which transmit neuronal signals. This is a brand-new finding.”

In a global first, Dr. Kremkow’s team was able to simultaneously capture the activity of nerve cells in the retina and their synaptically connected target neurons in the midbrain.

Until now, the functional wiring between the eye and midbrain had remained an unknown quantity. The researchers were able to show at the single-cell level that the spatial organization of the inputs from retinal ganglion cells in the midbrain constitutes a very precise representation of the original retinal input.

Retinal nerve cell signals travel via a mosaic of nerve tracts to be further processed by neurons in the midbrain. Represented by lines: electrical signals from the axonal branches of a retinal ganglion cell, which were measured simultaneously at the tiny electrodes of the Neuropixels probes in the midbrain. Credit: Charité | Jens Kremkow & Fotostudio Farbtonwerk I Bernhardt Link

“The structures of the midbrain effectively provide an almost one-to-one copy of the retinal structure,” says Dr. Kremkow.

He continues: “Another new finding for us was that the neurons in the midbrain receive a very strong and specific synaptic input from retinal ganglion cells, but only from a small number of these sensory neurons. These neural pathways enable a very structured and functional connection between the eye’s retina and the corresponding regions of the midbrain.”

Among other things, this new insight will enhance our understanding of the phenomenon known as blindsight, which can be observed in individuals who have sustained damage to the visual cortex due to trauma or tumor.

Although incapable of conscious visual perception, these individuals retain a residual ability to process visual information, resulting in an intuitive perception of stimuli, contours, movement, and even colors that appears to be linked to the midbrain.

To test whether the principles initially observed in the mouse model could also apply to other vertebrates – and hence whether they could be more general in nature – Dr. Kremkow and his team worked alongside a team from the Max Planck Institute for Biological Intelligence, where a Lise Meitner Research Group led by Dr. Daniele Vallentin focuses on neuronal circuits responsible for the coordination of precise movements in birds.

“Using the same types of measurements, we were able to show that, in zebra finches, the spatial organization of the nerve tracts connecting the retina and midbrain follow a similar principle,” says Dr. Vallentin.

She adds: “This finding was surprising, given that birds have significantly higher visual acuity and the evolutionary distance between birds and mammals is considerable.”


The researchers’ observations suggest that retinal ganglion cell inputs to both the optic tectum and the superior colliculi show similar spatial organization and functional wiring. Their findings led the researchers to conclude that the principles discovered must be crucial to visual processing in the mammalian midbrain. These principles may even be general in nature, applying to all vertebrate brains, including those of humans.

Regarding the researchers’ future plans, Dr. Kremkow says: “Now that we understand the functional, mosaic-like connections between retinal ganglion cells and neurons within the superior colliculi, we will further explore the way in which sensory signals are processed in the visual system, specifically in the regions of the midbrain, and how they contribute to visually guided reflexive behavior.”

The team also want to establish whether the new method might be used in other structures and whether it could be used to measure axonal activity elsewhere in the brain. Should this prove possible, it would open up a wealth of new opportunities to explore the brain’s underlying mechanisms.

About this visual neuroscience research news

Author: Manuela Zingl
Source: Charite
Contact: Manuela Zingl – Charite
Image: The image is credited to Charité | Jens Kremkow & Fotostudio Farbtonwerk I Bernhardt Link

Original Research: Open access.
“High-density electrode recordings reveal strong and specific connections between retinal ganglion cells and midbrain neurons” by Jens Kremkow et al. Nature Communications


Abstract

High-density electrode recordings reveal strong and specific connections between retinal ganglion cells and midbrain neurons

The superior colliculus is a midbrain structure that plays important roles in visually guided behaviors in mammals. Neurons in the superior colliculus receive inputs from retinal ganglion cells but how these inputs are integrated in vivo is unknown.

Here, we discovered that high-density electrodes simultaneously capture the activity of retinal axons and their postsynaptic target neurons in the superior colliculus, in vivo.

We show that retinal ganglion cell axons in the mouse provide a single cell precise representation of the retina as input to superior colliculus.

This isomorphic mapping builds the scaffold for precise retinotopic wiring and functionally specific connection strength. Our methods are broadly applicable, which we demonstrate by recording retinal inputs in the optic tectum in zebra finches.

We find common wiring rules in mice and zebra finches that provide a precise representation of the visual world encoded in retinal ganglion cell connections to neurons in retinorecipient areas.

Read original article here

HBO Will Fix House of the Dragon Visual Effects Glitch Spotted by Fans

The visual effects glitch spotted by eagle-eyed fans in the third episode of House of the Dragon will be fixed by HBO and revised on streaming platforms.

Variety notes that HBO is “planning to update the error on its streaming platforms” after the mistake was brought to light following last Sunday’s House of the Dragon episode, titled “Second of His Name.” Viewers spotted the VFX blunder during a scene featuring King Viserys, as played by Paddy Considine, in which he hands a rolled-up letter over to a soldier.

Two of the king’s fingers are completely green, as they were likely wrapped in a piece of green fabric to be edited in post-production. The intention may have been to remove the fingers altogether to show the progression of Viserys’ mysterious illness, presuming that the maggot treatment he received in Episode 2 was not successful in stopping “the advance of the rot.”

This VFX gaffe reminded some fans of the infamous Game of Thrones moment from the fourth episode of the final season, wherein viewers spotted a rogue disposable coffee cup left on a table in one of the scenes. After making the rounds on social media, HBO issued an appropriately funny response to the error and digitally removed the cup from the scene.


There’s a good chance that House of the Dragon’s green-fingered glitch will be corrected before the fourth episode airs on September 11.

The first episode of House of the Dragon, which is now available on YouTube, pulled in close to 10 million viewers across HBO and HBO Max, making it the largest audience for any new show in HBO’s history. Less than a week after its premiere, HBO committed to telling more stories from the epic saga of House Targaryen by renewing the Game of Thrones prequel for a second season.

Adele Ankers-Range is a freelance writer for IGN. Follow her on Twitter.



Read original article here

Apple’s iPhone: a visual history from 2007 through 2022

Fifteen years ago, Steve Jobs introduced the very first iPhone. He described it as three devices in one: a “widescreen iPod with touch controls, a revolutionary mobile phone, and a breakthrough internet communications device.”

But since its first unveiling, the iPhone has become much more than that. It’s a symbol of the tech industry, of the modern era as a whole, and has made Apple the largest company in the world in terms of market capitalization. In 2015, it was speculated to be the most profitable product ever and helped grow Apple’s market cap to not just $1 trillion or $2 trillion — but as high as $3 trillion.

One reason for that is the steady pace of progress Apple has made to keep the iPhone ahead of the pack. It started with a compact device that focused on user interface and software, then upgraded network speeds and processor specs. Later, it added incrementally better cameras, followed the market to add bigger screens, and kept introducing new software and security features.

A decade and a half on, the iPhone is still making headlines. Let’s take a look at how the iPhone has changed over the years:

iPhone (2007)

Image: Apple

This is the iPhone as it first appeared in 2007, laying the foundation for the modern smartphone. It introduced the classic grid of icons, the single home button, and dropped a physical keyboard — a standard for smartphones at the time — in favor of a multitouch display. It was ready for the internet and consuming media, but it still lacked a number of key features, including 3G connectivity and the App Store.

iPhone 3G (2008)

Image: Apple

The next iPhone launched in 2008 with that missing piece of the puzzle: the App Store. This gave developers the chance to build their own applications and increased the iPhone’s value as useful apps and games populated its digital shopfront. The iPhone 3G also had 3G data as well as push email and GPS navigation.

iPhone 3GS (2009)

Image: Apple

The first “S” model iPhone offered iterative improvements rather than big new features. Apple said it was twice as fast as its predecessor, with the “S” standing for speed. It retained the same basic shape as earlier models, including a 3.5-inch, 480 x 320 display. Oh, and users finally got the option to copy and paste text.

iPhone 4 (2010)

Image: Apple

The first major redesign of the iPhone brought stainless steel and glass to the table, as well as a new, squarer look with rounded corners. It was unveiled as the thinnest smartphone in the world and was the first Apple device to use a “Retina display.” It was also the first iPhone with a front-facing camera for making FaceTime video calls, and it shipped with iOS 4, which introduced multitasking.

iPhone 4S (2011)

Image: The Verge

The fifth-generation iPhone looked identical to its predecessor but shipped with Siri — Apple’s voice assistant, which was ahead of its time but a little too ambitious. The phone also came with a new, rear-facing 8-megapixel camera and redesigned antenna to fix connectivity problems that plagued the iPhone 4. It was unveiled on October 4th; Apple founder Steve Jobs died the following day. Verge score: 8.6 out of 10.

iPhone 5 (2012)

The second major redesign of the iPhone grew the screen to four inches, following a smartphone market that was trending toward larger devices. It was still compact and easy to handle with one hand, and its aluminum chassis lightened the device while remaining durable. The iPhone 5 also introduced the reversible Lightning connector, replacing the old 30-pin port. Verge score: 8.8 out of 10.

iPhone 5S (2013)

Photo by Michael Shane / The Verge

The iPhone 5S retained the 5’s design but replaced the home button with Touch ID, Apple’s fingerprint scanner, which was influential enough to make fingerprint scanners a standard feature across the smartphone industry. The device featured the first 64-bit processor in a smartphone with the A7 chip. It also shipped with iOS 7, a major overhaul of Apple’s mobile operating system that dropped various skeuomorphic design touches (like fake textures in apps) for a flatter, cleaner look. Verge score: 8.8 out of 10.

iPhone 5C (2013)

2013 marked the first time Apple announced two iPhones in one day. The cheaper of the two was the colorful iPhone 5C, which had similar specs to the prior year’s iPhone 5 but came with a polycarbonate shell that was famously described by designer Jony Ive as “unapologetically plastic.” At $549, investors were worried that the price wasn’t low enough to compete with lower-cost Android devices, and the phone was seen as a relative failure compared to the iPhone 5S. Verge score: 8.5 out of 10.

iPhone 6 and 6 Plus (2014)

In 2014, Apple finally went big with the iPhone, introducing the 4.7-inch iPhone 6 and the 5.5-inch iPhone 6 Plus. Both phones featured a new design with curved edges, introduced NFC support for mobile payments, and included improved cameras — which had become the iPhone’s standout feature. The larger, lighter phones weren’t as sturdy as previous models, though, and “Bendgate” was the Apple scandal of 2014. Verge score: 9 out of 10 (iPhone 6), 8.7 out of 10 (iPhone 6 Plus).

iPhone 6S and 6S Plus (2015)

Another “S” year meant another similar-looking iPhone. The glass was tougher, and the aluminum case was less prone to bending on the iPhone 6S and 6S Plus, but not much else had changed. The 6S phones added a pressure-sensitive display called 3D Touch that let you quickly access menus and previews by pushing into the screen until it “popped” with haptic feedback. It was a clever feature but came with a learning curve, and Apple would drop it from other iPhones four years later. Verge score: 9 out of 10.

iPhone SE (2016)

The beginning of 2016 brought a surprise: the midcycle iPhone SE — a $399 device that looked exactly like the two-and-a-half-year-old iPhone 5S but with speedy new hardware inside. It was the company’s first real attempt to add an affordable entry iPhone option after the not-so-inexpensive iPhone 5C. The four-inch screen was perfect for people who didn’t quite feel ready to move on to a larger device, but it was clear Apple thought big iPhones were the future. Verge score: 8.7 out of 10.

iPhone 7 and 7 Plus (2016)

Photo by James Bareham / The Verge

The iPhone 7 and 7 Plus refined what the iPhone 6S offered by adding better cameras and water resistance, but avoided a complete redesign. The Plus model included a new dual-camera system that added 2x zoom and a Portrait Mode that applied a virtual shallow depth of field to subjects. Both models dropped the mechanical home button in favor of a fully digital lookalike, and, yes, the headphone jack was removed. Apple called it “courage,” while critics called it arrogance. Either way, there was no going back. Verge score: 9 out of 10.

iPhone 8 and 8 Plus (2017)

Photo by James Bareham / The Verge

The iPhone 8 and 8 Plus weren’t Apple’s revolutionary new take to mark 10 years of the iPhone. Instead, they were refreshes of the company’s hugely successful 4.7-inch and 5.5-inch form factors that began with the iPhone 6. Both the iPhone 8 and 8 Plus could easily have been an S update of the iPhone 7 models, with improved cameras and processors and an all-glass rear to accommodate new wireless charging coils. That was fine, though, because these models were released in the shadow of the iPhone that was actually fit to represent 10 years of the product. Verge score: 8 out of 10.

iPhone X (2017)

Photo by James Bareham / The Verge

The iPhone X (that’s 10, folks) broke the visual mold of every iPhone before it by shedding the physical home button and adding an edge-to-edge OLED display to an all-new stainless steel chassis. It was the first to include Face ID biometrics instead of a fingerprint reader, and it introduced silly Animoji characters for iMessage that could mirror your facial expressions. It didn’t come cheap: the iPhone X started at $999, an unfortunately successful market test that forever inflated premium smartphone prices. Verge score: 9 out of 10.

iPhone XS and XS Max (2018)

Photo by James Bareham / The Verge

The iPhone XS looked similar to the previous year’s iPhone X with minor improvements, but this time, Apple made a plus-sized “Max” version with a huge 6.5-inch display. Apple was once slow to build larger iPhones, but with the iPhone XS Max, it eagerly embraced them. This became the era where Apple associated bigger screens with top-of-the-line features. All that was missing was the “Pro” moniker. Verge score: 8.5 out of 10.

iPhone XR (2018)

Photo by Amelia Holowaty Krales / The Verge

While the iPhone X was released alongside the legacy-style iPhone 8, the XS didn’t get an iPhone 9 as a running mate. Instead, Apple introduced the iPhone XR. This lower-end model traded the premium stainless steel chassis of the XS for colorful aluminum, dropped the telephoto camera and 3D Touch, and downgraded the screen from OLED to LCD. But the iPhone XR still gave users a big screen, Face ID, the latest processor, and excellent battery life. At a reasonable $749 starting price, it was an excellent value for the money. Verge score: 8 out of 10.

iPhone 11 (2019)

Photo by Amelia Holowaty Krales / The Verge

The iPhone 11 borrowed the iPhone XR’s form factor, but this time, Apple made it the year’s mainline device. Apple added a new ultrawide camera that let you fit more into a photo, and, like the XR, it came in multiple colors. The cost of entry dropped to $699, too, though it still came with a small 64GB of storage. Verge score: 9 out of 10.

iPhone 11 Pro and 11 Pro Max (2019)

Photo by Amelia Holowaty Krales / The Verge

The iPhone went Pro for the first time with the iPhone 11 Pro and Pro Max in 2019. These iPhones were the spiritual successors of the iPhone XS and XS Max, but instead of just two cameras, they added a third for ultrawide shots. The 11 Pros came with an 18W fast charger and a USB-C to Lightning cable in the box. The rear of the iPhone 11 Pro was still glass, but with a matte finish instead of a glossy one, and it came in a striking midnight green color. Verge score: 9 out of 10.

iPhone SE (2020)

You’d be forgiven if you mistook this second-generation iPhone SE for an iPhone 8. They’re essentially the same size and shape, but the updated iPhone SE benefitted from better performance and a slightly better camera that included portrait mode. The first iPhone SE from 2016 had a cult following for its small size, but the second-generation SE instead switched to the larger 4.7-inch size of more recent iPhones. What the new SE did mirror from its predecessor was the inclusion of modern specs at a cheap price — starting at $399. Verge score: 8.5 out of 10.

iPhone 12 and 12 mini (2020)

Photo by Vjeran Pavic / The Verge

The iPhone 12 brought back the flat-edged, glass-sandwich design that Apple first used on the iPhone 4 in 2010. The 12 had wide and ultrawide cameras like the iPhone 11, but it traded the LCD for an OLED screen. And small-phone enthusiasts rejoiced as Apple revealed a shrunken version: the iPhone 12 mini. The iPhone 12 also introduced 5G cellular radios and Apple’s MagSafe charging, which let you mount the phone and charge wirelessly at potentially faster speeds than standard wireless chargers. Verge score: 9 out of 10.

iPhone 12 Pro and 12 Pro Max (2020)

Photo by Amelia Holowaty Krales / The Verge

The iPhone 12 Pro offered some extra features and snazzier trimmings compared to the similarly designed iPhone 12: a stainless steel frame, a matte glass finish on the back, and a telephoto camera, on top of the upgrade to 5G. In true for-the-pros fashion, the iPhone 12 Pro had a lidar scanner for 3D mapping applications and for assisting with portrait mode. The phone could also shoot photos in Apple’s ProRAW format. And Apple made its largest-screened iPhone yet with the iPhone 12 Pro Max’s huge 6.7-inch display. Verge score: 9 out of 10.

iPhone 13 and 13 mini (2021)

Photo by Vjeran Pavic / The Verge

The iPhone 13 could easily have been an “S” year phone, resembling the 12 in most ways. The cameras were upgraded with sensors similar to those in the previous year’s iPhone 12 Pro (though there were still only two of them), and the lenses were rearranged into a diagonal alignment instead of a vertical one. Apple did give both 13 models a noticeable boost in battery life compared to the 12, and it finally stopped shipping its flagships with 64GB of storage by making 128GB the new entry size. Verge score: 9 out of 10.

iPhone 13 Pro and 13 Pro Max (2021)

Photo by Vjeran Pavic / The Verge

At first glance, the iPhone 13 Pro looks just like the 12 Pro, but next to each other, you can see the camera lenses have grown — hinting at the array of camera updates in these models. The 13 Pro came with 3x optical zoom, support for macro photography, and better low-light performance on the wide and ultrawide cameras. Video was better than ever, with an option for shooting ProRes footage, and a new Cinematic Mode created an impression of depth of field between subjects while shooting. With the iPhone 13 Pro series, Apple finally added ProMotion high-refresh-rate screens, which had become standard on most flagship Android devices. Verge score: 9 out of 10.

iPhone SE (2022)

Photo by Allison Johnson / The Verge

The third-generation iPhone SE is almost a carbon copy of the 2020 model, but it received the latest processor and a new 5G cellular radio. Those upgrades couldn’t keep the original price point, though, as the phone’s cost of entry rose to $429, still for just 64GB of storage. Verge score: 8 out of 10.

iPhone 14 (2022)

Apple announced its “Far out” event for September 7th, where it’s expected to reveal the new iPhone 14 lineup. Rumors suggest the company will introduce a Max-sized iPhone 14, making the projected lineup: iPhone 14, 14 Max, 14 Pro, and 14 Pro Max.

This time around, the non-pro models might retain last year’s A15 Bionic chipset, with the Pro differentiator being a new A16 chipset. We may even see the notch get nicked in favor of smaller camera and Face ID cutouts. And unfortunately for some, the iPhone mini may not make a return.
