RSS Aggregator
CISA orders feds to patch actively exploited Dell flaw within 3 days
How infostealers turn stolen credentials into real identities
WoWee, a client for the game World of Warcraft
This ‘Machine Eye’ Could Give Robots Superhuman Reflexes
Running on a brain-like chip, the ‘eye’ could help robots and self-driving cars make split-second decisions.
You’re driving in a winter storm at midnight. Icy rain smashes your windshield, immediately turning it into a sheet of frost. Your eyes dart across the highway, seeking any movement that could be wildlife, struggling vehicles, or highway responders trying to pass. Whether you find safe passage or meet catastrophe hinges on how fast you see and react.
Even experienced drivers struggle with bad weather. For self-driving cars, drones, and other robots, a snowstorm could cause mayhem. The best computer-vision algorithms can handle some scenarios, but even running on advanced computer chips, their reaction times are roughly four times greater than a human’s.
“Such delays are unacceptable for time-sensitive applications…where a one-second delay at highway speeds can reduce the safety margin by up to 27m [88.6 feet], significantly increasing safety risks,” Shuo Gao at Beihang University and colleagues wrote in a recent paper describing a new superfast computer vision system.
Instead of working on the software, the team turned to hardware. Inspired by the way human eyes process movement, they developed an electronic replica that rapidly detects and isolates motion.
The machine eye’s artificial synapses connect transistors into networks that detect changes in the brightness of an image. Like biological neural circuits, these connections store a brief memory of the past before processing new inputs. Comparing the two allows them to track motion.
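The store-and-compare behavior described here can be sketched in software as a per-pixel leaky memory (a toy model for illustration only, not the paper's actual synaptic circuit):

```python
import numpy as np

def motion_response(frames, decay=0.8):
    """Toy model of a synaptic motion detector: each pixel keeps a
    decaying memory of past brightness and responds to the difference
    between the new input and that stored trace."""
    memory = np.zeros_like(frames[0], dtype=float)
    responses = []
    for frame in frames:
        responses.append(np.abs(frame - memory))  # activity = change vs. remembered past
        memory = decay * memory + (1 - decay) * frame  # update the stored trace
    return responses

# A static scene produces shrinking responses over time; only change excites it.
static = motion_response([np.ones((2, 2))] * 5)
```

A sudden brightness change would spike the response, while an unchanging background fades toward silence, which is the filtering property the article attributes to the artificial synapses.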
Combined with a popular vision algorithm, the system quickly separates moving objects, like walking pedestrians, from static objects, like buildings. By limiting its attention to motion, the machine eye needs far less time and energy to assess and respond to complex environments.
When tested on autonomous vehicles, drones, and robotic arms, the system sped up processing times by roughly 400 percent and, in most cases, surpassed the speed of human perception without sacrificing accuracy.
“These advancements empower robots with ultrafast and accurate perceptual capabilities, enabling them to handle complex and dynamic tasks more efficiently than ever before,” wrote the team.
Two Motion Pictures
A mere flicker in the corner of an eye captures our attention. We’ve evolved to be especially sensitive to movement. This perceptual superpower begins in the retina. The thin layer of light-sensitive tissue at the back of the eye is packed with cells fine-tuned to detect motion.
Retinal cells are a curious bunch. They store memories of previous scenes and spark with activity when something in our visual field shifts. The process is a bit like an old-school film reel: Rapid transitions between still frames lead to the perception of movement.
Every cell is tuned to detect visual changes in a particular direction—for example, left to right or top to bottom—but is otherwise dormant. These activity patterns form a two-dimensional neural map that the brain interprets as speed and direction within a fraction of a second.
“Biological vision excels at processing large volumes of visual information” by focusing only on motion, wrote the team. When driving across an intersection, our eyes intuitively zero in on pedestrians, cyclists, and other moving objects.
Computer vision takes a more mathematical approach.
A popular type called optical flow analyzes differences between pixels across visual frames. The algorithm segments pixels into objects and infers movement based on changes in brightness. This approach assumes that objects maintain brightness as they move: a white dot, for example, remains a white dot as it drifts to the right, at least in simulations. Nearby pixels are also assumed to move in tandem, another marker of coherent motion.
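The brightness-constancy assumption can be illustrated with two toy frames: a bright dot that shifts one pixel to the right keeps its value, so per-pixel temporal differences light up only at the pixel the dot left and the pixel it entered (a minimal sketch, not a full optical flow solver):

```python
import numpy as np

# Frame 1: a bright dot at column 1; frame 2: the same dot shifted one pixel right.
f1 = np.zeros((5, 5)); f1[2, 1] = 1.0
f2 = np.zeros((5, 5)); f2[2, 2] = 1.0

# Under brightness constancy, temporal change is confined to the
# positions the object vacated and occupied.
diff = f2 - f1
moving = np.argwhere(diff != 0)
print(moving)  # only (2,1) and (2,2) changed
```

A real optical flow algorithm then solves for a velocity field consistent with these changes across the whole image, which is where the computational cost comes from.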
Although inspired by biological vision, optical flow struggles in real-world scenarios. It’s an energy hog and can be laggy. Add in unexpected noise—like a snowstorm—and robots running optical flow algorithms will have trouble adapting to our messy world.
Two-Step Solution
To get around these problems, Gao and colleagues built a neuron-inspired chip that dynamically detects regions of motion and then focuses an optical flow algorithm on only those areas.
Their initial design immediately hit a roadblock. Traditional computer chips can’t adjust their wiring. So the team fabricated a neuromorphic chip that, true to its name, computes and stores information at the same spot, much like a neuron processes data and retains memory.
Because neuromorphic chips don’t shuttle data from memory to processors, they’re far faster and more energy-efficient than classical chips. They outshine standard chips in a variety of tasks, such as sensing touch, detecting auditory patterns, and processing vision.
“The on-device adaptation capability of synaptic devices makes human-like ultrafast visual processing possible,” wrote the team.
The new chip is built from materials and designs commonly used in other neuromorphic chips. Similar to the retina, the array’s artificial synapses encode differences in brightness and remember these changes by adjusting their responses to subsequent electrical signals.
When processing an image, the chip converts the data into voltage changes, which only activate a handful of synaptic transistors; the others stay quiet. This means the chip can filter out irrelevant visual data and focus optical flow algorithms on regions with motion only.
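The chip's filter-then-process strategy can be approximated in software: threshold the temporal difference and hand only the active region to the expensive algorithm. The function names below are hypothetical illustrations; in the actual system this gating happens in analog hardware, not code:

```python
import numpy as np

def motion_mask(prev, curr, thresh=0.1):
    """Flag pixels whose brightness changed more than `thresh`."""
    return np.abs(curr - prev) > thresh

def process_moving_regions(prev, curr, expensive_fn, thresh=0.1):
    """Run `expensive_fn` (standing in for optical flow) only where
    motion was detected, skipping static background entirely."""
    mask = motion_mask(prev, curr, thresh)
    if not mask.any():
        return None  # nothing moved; skip the costly stage
    ys, xs = np.where(mask)
    # Crop to the bounding box of motion before heavy processing.
    region = curr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return expensive_fn(region)

prev = np.zeros((100, 100))
curr = prev.copy(); curr[10:12, 40:42] = 1.0
out = process_moving_regions(prev, curr, expensive_fn=lambda r: r.shape)
print(out)  # (2, 2): only a tiny patch reaches the slow stage
```

The speedup follows directly from the area ratio: here the expensive stage sees 4 pixels instead of 10,000.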
In tests, the two-step setup boosted processing speed. When analyzing a movie of a pedestrian about to dash across a road, the chip detected their subtle body position and predicted what direction they’d run in roughly 100 microseconds—faster than a human. Compared to conventional computer vision, the machine eye roughly doubled the ability of self-driving cars to detect hazards in a simulation. It also improved the accuracy of robotic arms by over 740 percent thanks to better and faster tracking.
The system is compatible with computer vision algorithms beyond optical flow, such as the YOLO neural network that detects objects in a scene, making it adjustable for different uses.
“We do not completely overthrow the existing camera system; instead, by using hardware plug-ins, we enable existing computer vision algorithms to run four times faster than before, which holds greater practical value for engineering applications,” Gao told the South China Morning Post.
The post This ‘Machine Eye’ Could Give Robots Superhuman Reflexes appeared first on SingularityHub.
40 years ago, the Soviets launched the Mir space station. Despite its turbulent operation, it was ahead of its time and ultimately became the basis for the development of the ISS (gallery)
ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories
Nigerian man gets eight years in prison for hacking tax firms
The 30 best films about World War II: The Pianist, Stalingrad, Das Boot… and also the Czech Riders in the Sky
DEF CON bans three Epstein-linked men from future events
Cybersecurity conference DEF CON has added three men named in the Epstein files to its list of banned individuals. They are not accused of any criminal wrongdoing.…
The PocketBook Era e-reader has dropped to its lowest price. In our test it beat even the Kindle
Data stored in glass could last over 10,000 years, Microsoft says
Enterprises struggling with the cost and complexity of long-term data archival could soon have a new option: a piece of glass.
New research published on Wednesday suggests that a borosilicate glass plate 120mm square and just 2mm thick can store 4.8TB of data across 301 layers, with accelerated aging tests indicating that the data would remain intact for at least 10,000 years.
“Glass is a permanent data storage material that is resistant to water, heat, and dust,” Microsoft researchers wrote in a paper published in the science and technology journal Nature. “We have unlocked the science for parallel high-speed writing and developed a technique to permit accelerated aging tests on the written glass, suggesting that the data should remain intact for at least 10,000 years.”
Previous versions of the technology required fused silica, a high-purity glass available from only a handful of manufacturers. The new findings show the system works equally well with borosilicate — widely manufactured and significantly cheaper — bringing the technology a step closer to commercial viability, the paper added.
The timing is significant.
The global datasphere is doubling approximately every three years, according to Seagate research cited in the paper, yet “most digital archive systems rely on media that degrade” well short of the multi-decade retention timescales that legal, financial, and regulatory obligations increasingly demand, the authors noted.
Magnetic tape, the most widely deployed archival medium today, reflects those constraints. An LTO-10 (Linear Tape-Open) cartridge, the current enterprise benchmark, holds 30TB to 40TB native at 400MB/s, but its rated shelf life is just 30 years. It requires climate-controlled storage between 16°C and 25°C and migration roughly every five to ten years.
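The migration burden is easy to quantify: keeping data on tape for a century at the refresh cadence quoted above implies on the order of ten to twenty full copy-outs, each a nontrivial operation. Rough arithmetic from the figures in the text:

```python
# Rough migration arithmetic for a 100-year tape archive, using the
# refresh interval quoted above (every 5 to 10 years).
horizon_years = 100
migrations_worst = horizon_years // 5   # refresh every 5 years -> 20 copies
migrations_best = horizon_years // 10   # refresh every 10 years -> 10 copies

# Time to copy one 30TB cartridge at the rated 400MB/s, ignoring overhead.
cartridge_tb = 30
seconds = cartridge_tb * 1e12 / 400e6
print(migrations_best, migrations_worst, round(seconds / 3600, 1))
```

Even in the best case, every byte must be rewritten ten times over the horizon, which is the operational cost the analysts below are pointing at.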
That operational overhead, analysts say, is the real cost of tape — not the media. “Archival estates rarely fail because cartridges chemically degrade on schedule,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “They fail because compatibility windows close, drive generations evolve, firmware support sunsets, and robotics require refresh.”
Tape-as-a-service models have shifted some of that burden, noted Vishesh Divya, principal analyst at Gartner, moving hardware lifecycle management to providers under defined service-level assurances.
“LTO tape remains the benchmark for enterprise cold storage,” Divya said. “The media cost per terabyte remains low, the ecosystem is mature, and enterprises have decades of operational experience managing refresh cycles.”
Sony’s Optical Disc Archive — the main optical alternative at 5.5TB per cartridge with a 100-year rated shelf life — was discontinued in March 2025, leaving no comparable product on the market.
How data is written and read from the glass
Project Silica, Microsoft’s glass-based storage initiative, uses femtosecond laser pulses to encode data as three-dimensional structures called voxels inside the glass, at 25.6 megabits per second per beam and an energy cost of 10.1 nanojoules per bit.
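The quoted per-beam figures show why the parallel-writing breakthrough matters: at 25.6 megabits per second, a single beam would need over two weeks to fill one 4.8TB plate. Back-of-envelope arithmetic from the numbers above:

```python
plate_bits = 4.8e12 * 8            # 4.8TB plate, in bits
rate_bits_per_s = 25.6e6           # 25.6 megabits per second per beam
energy_per_bit_j = 10.1e-9         # 10.1 nanojoules per bit

write_days = plate_bits / rate_bits_per_s / 86400
total_energy_kj = plate_bits * energy_per_bit_j / 1e3
print(round(write_days, 1), round(total_energy_kj, 1))
```

That is roughly 17 days per plate per beam and under 400 kilojoules of laser energy for the whole plate, so throughput scales with the number of parallel beams rather than with energy cost.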
The paper describes two encoding methods. The first, birefringent voxels, modifies the polarization properties of the glass. The team cut the number of laser pulses required with a pseudo-single-pulse technique — one pulse split to simultaneously begin one voxel and complete another — enabling faster beam scanning.
The second method, phase voxels, is a new invention that modifies the phase properties of the glass instead and requires only a single pulse per voxel. Crucially, it works in borosilicate glass, where the birefringent approach did not. “Much higher levels of three-dimensional inter-symbol interference in phase voxels can be mitigated with a machine learning classification model,” the researchers wrote.
Earlier Project Silica readers required three or four cameras to retrieve data; the updated system requires one, completing what the researchers described as the first fully demonstrated end-to-end glass archival system, from writing and storage through to retrieval.
Longevity verified through accelerated aging
The longevity claim is backed by a nondestructive optical method the team developed to measure voxel degradation in place, combined with accelerated aging techniques applied to written borosilicate samples. “Accelerated ageing tests on written voxels in borosilicate suggest data lifetimes exceeding 10,000 years,” the researchers noted.
For enterprise buyers, longevity alone will not make the case.
“A realistic TCO comparison must be modelled across multi-decade lifecycle horizons, not procurement cycles,” said Gogia. “Glass storage reframes the economic curve by potentially eliminating migration cycles — reducing labour, reconciliation overhead, and operational disruption.” Write speeds remain materially slower than tape, however, making glass better suited to ultra-cold, low-ingestion estates.
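The "multi-decade lifecycle horizon" framing can be made concrete with a toy cost model. All numbers below are invented for illustration and do not come from the article; only the structure (media cost plus per-migration cost times migration count) reflects the argument:

```python
def migrations(horizon_years, refresh_years):
    """Number of full media refreshes over the horizon."""
    return horizon_years // refresh_years

def cumulative_cost(horizon_years, media_cost, refresh_years, migration_cost):
    """Media cost plus per-migration operational cost over the horizon."""
    return media_cost + migrations(horizon_years, refresh_years) * migration_cost

# Hypothetical figures: tape is cheap to buy but must be re-copied every
# ~7 years; glass is assumed pricier up front but migration-free.
tape = cumulative_cost(100, media_cost=100, refresh_years=7, migration_cost=500)
glass = cumulative_cost(100, media_cost=2000, refresh_years=10**6, migration_cost=0)
print(tape, glass)
```

Whatever the real unit costs turn out to be, the model shows why eliminating the migration term can dominate the comparison once the horizon stretches past a few refresh cycles.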
Compliance adds a further dimension. Data encoded as permanent optical modifications cannot be overwritten, reducing ransomware exposure. But “compliance is a system property, not a substrate property,” Gogia cautioned. “Enterprises must still ensure encryption key rotation, metadata indexing, and audit trail completeness. A 10,000-year medium does not remove the obligation to demonstrate governance discipline.”
No commercial product yet
Microsoft said in a separate blog post that the research phase of Project Silica is now complete. “We are continuing to consider learnings from Project Silica as we explore the ongoing need for sustainable, long-term preservation of digital information,” the company said, without disclosing a commercialization roadmap. If commercialized, glass storage is unlikely to displace tape.
“It is more likely to emerge as a specialized ultra-long retention [option] rather than a replacement for tape-based cold storage,” said Gartner’s Divya. “Any new medium would have to compete on the full-stack equation — economics, hardware, software, and operational model — not just on media longevity.”
Texas sues TP-Link over Chinese hacking risks, user deception
The EU has introduced a new tool to strengthen the security of ICT supply chains - the ICT Supply Chain Security Toolbox
Hackers target Microsoft Entra accounts in device code vishing attacks
5 ways Gemini can help you make Google Slides presentations
Gemini, Google’s generative AI assistant, has various tools you can access within Google Slides to assist you in creating and editing your presentations. Additionally, you can generate whole presentations in the standalone Gemini app and then export them into Slides to work on.
Anyone with a Google account can use the Gemini app for free. To use Gemini within Google Slides, you need to be subscribed to a Google Workspace Business Standard, Business Plus, or Enterprise plan or a Google AI plan. Alternatively, you can sign up for Google Workspace Labs with a free Google account to use some of the features described in this story.
Note: If you use Google Workspace at work, your administrator may need to enable permission for Gemini to be used in Google Slides.
In this guide we’ll cover how to use all these features. If you’re new to Google Slides or need a refresher, see our Google Slides cheat sheet first to get up and running.
As when using any generative AI tool, remember that Gemini can make mistakes or simply make things up, so you should always check its output for accuracy.
In this article:
- Use Gemini to create a presentation
- Use Gemini to create individual slides
- Use Gemini to generate images for your slides
- Use Gemini to edit images in your slides
- Use Gemini to polish text in your slides
1. Use Gemini to create a presentationYou can prompt Gemini to create an entire presentation with multiple slides. This isn’t a function built into Slides, as of this writing. Instead you do this through a feature called Canvas in the Gemini web app. Based on your prompt, the AI can generate a set of slides that you then export to Slides as a single presentation.
Open the Gemini web app in your browser. At the bottom of the chat box, click Tools. From the drop-down menu that opens, select Canvas.
In the Gemini web app, select the Canvas tool to get started.
Howard Wen / Foundry
Then, inside the chat box, type your prompt: It should be detailed and specifically outlined. First you should state “Build a presentation” or “Create a presentation.” Next, you can describe general or specific information that you want to have on each slide. Consider these examples:
- Create a presentation for a digital marketing agency with the goal of selling its services to potential clients.
- Build me a presentation template that reports my company’s quarterly earnings. Include these slides in it:
- Slide 1 is the intro with text explaining the purpose of this presentation.
- Slide 2 shows a chart template for Quarter 1.
- Slide 3 shows a chart template for Quarter 2.
- Slide 4 shows a chart template for Quarter 3.
- Slide 5 shows a chart template for Quarter 4.
- Slide 6 is the conclusion.
You can also click the + symbol on the toolbar to attach a file from your Google Drive or upload one from your PC, such as a document or spreadsheet. Gemini will try to use this as a source to create a series of slides for a presentation. Examples:
- Build a presentation using the attached spreadsheet.
- Use the attached document to create slides for a presentation.
When you’re finished writing your prompt, press the Enter key or click the right arrow icon at the lower right of the chat box. Gemini will get to work. It’ll generate a presentation that appears in the chat window above the chat box. You can preview it by scrolling through it.
A preview of the generated presentation appears in the main window of the Gemini app.
If you like what you see and want to export it to Slides, click the Export or Export to Slides button at the upper right of the presentation preview. The presentation will be saved as a Google Slides file in your Google Drive.
For more details about using Canvas in the Gemini app, including editing Gemini’s output, see our Gemini Canvas guide.
2. Use Gemini to create individual slidesWith a paid Google Workspace account, you can use the Gemini sidebar right in the Slides app to help you create and edit various elements of your presentation. For starters, you can have Gemini generate individual slides. These can serve as templates that you add your information to, or they can be initial drafts that include details specific to your needs.
See our Google Slides cheat sheet for detailed steps for using Gemini to generate slides.
3. Use Gemini to generate images for your slidesYou can also prompt Gemini to create an image and insert it onto a slide.
With your presentation open, navigate to the slide where you want to add an image, or add a new slide to put the image on.
On the vertical toolbar to the right of the slide, click the Generate an image button (an icon of a picture with a sparkle at upper right). Alternatively, on the menu bar along the top, select Insert > Image > Generate an image. The Gemini sidebar will open to the right of your presentation, showing a set of tools for creating images.
When you select Generate an image, the Gemini sidebar displays image creation tools.
Inside the chat box, write a prompt that describes the image you want Gemini to create. Use your imagination!
- Draw an image of a US dollar symbol with an arrow pointing up behind it.
- Generate an image of a man pointing at a chalkboard with the word GOALS written on it.
You can also click Add a style in the sidebar and select a specific art style for Gemini to use when rendering your image, such as Photography, Vector art, or Watercolor.
Gemini will generate four images based on your prompt and display thumbnails in the sidebar. Click one of these thumbnails to see a larger preview of how it will look on your slide. You can click the < and > buttons to cycle through the other generated images.
Previewing the images Gemini generated.
If you decide on one that you like, click its Insert button. The image will be placed onto your slide where the cursor is.
If you don’t like any of the images that Gemini generated, revise your prompt or write a new prompt in the chat box to have Gemini generate another round.
4. Use Gemini to edit images in your slidesGemini can help you make AI-generated alterations to images in your presentation. It can replace the background of an image, or you can edit the image in a variety of ways, such as having Gemini change its size or add elements to it that come from your imagination. While it may sound like these capabilities are meant to be applied to photographs, you can also experiment with charts or other illustrations to generate interesting results.
With your presentation open, navigate to the slide that has the image you want to edit. Select the image on the slide. On the small toolbar that appears below the selected image, click Edit image. Alternatively, you can right-click on the image and select Edit image on the menu that opens.
Select the image you want to change and click Edit image on the toolbar that appears below.
This action will open the Gemini image editing sidebar to the right of your presentation. On it, there are two options that you can select: Edit image and Replace background. Click the one you want to use.
In the Gemini image editing sidebar, your choices are Edit image or Replace background.
Inside the chat box, you’ll be shown an example prompt. Type in a prompt that describes the change that you want Gemini to make, and click the Create button.
“Replace background” examples:
- Set this chart against a cityscape at nighttime.
- Add drawings of small seashells as subdued elements behind this logo.
“Edit image” examples:
- Widen this photo horizontally to the edges of the slide.
- Add a person sitting at this laptop.
Gemini will generate an altered version of the image based on your prompt and show a thumbnail in the sidebar. Click View more to have Gemini generate another image.
A thumbnail of the altered image appears in the Gemini sidebar.
Click a thumbnail to see a preview of how it will look on your slide. You can click the < or > buttons to cycle through the other generated images.
If you decide on one that you like, click the Replace button. The AI-generated image will replace the one on your slide. Or you can click the down arrow to the right of the Replace button and click Insert. The generated image will be inserted where the cursor is.
5. Use Gemini to polish text in your slidesAnother way to use Gemini in Slides is to have it rewrite blocks of text on your slide. This text-polishing feature can make large passages more concise (and thus more legible when you present them to others) or change the tone of the writing.
With your presentation open, navigate to the slide that contains the text that you want to polish, such as a list, paragraph, or title. Click this block of text, then click the Refine button (a pencil with sparkle icon) at the lower left of your selected text. This opens the Refine menu with the following options: Shorten, Rephrase, More formal, or Bulletize.
Options for refining text on a slide.
In some cases, you may find that Gemini has preselected one of these options for you. For example, if your text doesn’t fit into the space allotted to it on the slide, Gemini might show Shorten as the default option. You can click the down arrow next to Shorten to access the other refinement options.
You can select one of these options or click inside the box with the words Modify with a prompt, where you can type a prompt that describes how you want Gemini to rewrite the selection. Examples:
- Make these words sound more casual and friendly.
- Rewrite this into a numbered list.
- Make this a little longer.
After you’ve made your selection or entered your custom prompt, a rewrite will be generated and appear in a panel over the slide.
You can insert the rewritten text that Gemini generated or refine it in various ways.
If you like the result, click the Replace button. The original text on your slide will be replaced with the generated version. Or you can click the down arrow to the right of the Replace button and click Insert. The generated text will be inserted where the cursor is.
If you want to refine the generated text, you can either type a prompt in the “Refine with a prompt” text entry box or click the Refine button again and select another rewrite option. To trigger Gemini to generate a different result from your original prompt, select Retry from the Refine menu.
Related: From Exposure to Exploitation: How AI Collapses Your Response Window
How will Sony deal with the memory crisis? It will likely raise the prices of games and PlayStation Plus
UK to demand social platforms take down abusive intimate images within 48 hours
The UK is bracketing "intimate images shared without a victim's consent" along with terror and child sexual abuse material, and demanding that online platforms remove them within two days.…