Physics World

New on-chip laser fills long-sought-after green gap

26 Sep 2024

Devices will be important for applications in quantum sensing and computing, biology, underwater communications and display technologies

A series of visible-light colours generated by a microring resonator

On-chip lasers that emit green light are notoriously difficult to make. But researchers at the National Institute of Standards and Technology (NIST) and the NIST/University of Maryland Joint Quantum Institute may now have found a way to do just this, using a modified optical component known as a ring-shaped microresonator. Green lasers are important for applications including quantum sensing and computing, medicine and underwater communications.

In the new work, a research team led by Kartik Srinivasan modified a silicon nitride microresonator such that it was able to convert infrared laser light into yellow and green light. The researchers had already succeeded in using this structure to convert infrared laser light into red, orange and yellow wavelengths, as well as a wavelength of 560 nm, which lies at the edge between yellow and green light. Previously, however, they were not able to produce the full range of yellow and green colours to fill the much sought-after “green gap”.

More than 150 distinct green-gap wavelengths

To overcome this problem, the researchers made two modifications to their resonator. The first was to thicken it by 100 nm so that it could more easily generate green light with wavelengths down to 532 nm. Being able to produce such a short wavelength means that the entire green wavelength range is now covered, they say. In parallel, they modified the cladding surrounding the microresonator by etching away part of the silicon dioxide layer that it was fabricated on. This alteration made the output colours less sensitive to the dimension of the microring.

These changes meant that the team could produce more than 150 distinct green-gap wavelengths, and fine-tune them too. “Previously, we could make big changes – red to orange to yellow to green – in the laser colours we could generate with OPO [optical parametric oscillation], but it was hard to make small adjustments within each of these colour bands,” says Srinivasan.

Like the previous microresonator, the new device works thanks to a process known as nonlinear wave mixing. Here, infrared light that is pumped into the ring-shaped structure is confined and guided within it. “This infrared light circulates around the ring hundreds of times due to its low loss, resulting in a build-up of intensity,” explains Srinivasan. “This high intensity enables the conversion of pump light to other wavelengths.”

Third-order optical parametric oscillation

“The purpose of the microring is to enable relatively modest, input continuous-wave laser light to build up in intensity to the point that nonlinear optical effects, which are often thought of as weak, become very significant,” says team member Xiyuan Lu.

The specific nonlinear optical process the researchers use is third-order optical parametric oscillation. “This works by taking light at a pump frequency νp and creating one beam of light that’s higher in frequency (called the signal, at a frequency νs) and one beam that’s lower in frequency (called the idler, at a frequency νi),” explains first author Yi Sun. “There is a basic energy-conservation requirement that 2νp = νs + νi.”

Simply put, this means that for every two pump photons that are used to excite the system, one signal photon and one idler photon are created, he tells Physics World.
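As a back-of-the-envelope illustration (ours, not the team’s code), the energy-conservation rule fixes the idler once the pump and signal are chosen. The short Python sketch below converts wavelengths to frequencies and back; the example wavelengths are hypothetical, chosen only to show how an infrared pump can yield a green signal plus an infrared idler.

```python
# Energy conservation in third-order OPO: two pump photons become one
# signal photon and one idler photon, so 2*nu_p = nu_s + nu_i.
c = 299_792_458  # speed of light, m/s

def idler_wavelength_nm(pump_nm, signal_nm):
    """Idler wavelength (nm) implied by 2*nu_p = nu_s + nu_i."""
    nu_p = c / (pump_nm * 1e-9)    # pump frequency, Hz
    nu_s = c / (signal_nm * 1e-9)  # signal frequency, Hz
    nu_i = 2 * nu_p - nu_s         # energy conservation
    return 1e9 * c / nu_i

# Hypothetical example: a 780 nm infrared pump paired with a 532 nm
# green signal implies an idler at ~1460 nm, deep in the infrared.
print(f"{idler_wavelength_nm(780, 532):.0f} nm")
```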

Towards higher power and a broader range of colours

The NIST/University of Maryland team has been working on optical parametric oscillation as a way to convert near-infrared laser light to visible laser light for several years now. One of their main objectives was to fill the green gap in laser technology and fabricate frequency-converted lasers for quantum, biology and display applications.

“Some of the major applications we are ultimately targeting are high-end lasers, continuous-wave single-mode lasers covering the green gap or even a wider range of frequencies,” reveals team member Jordan Stone. “Applications include lasers for quantum optics, biology and spectroscopy, and perhaps laser/hologram display technologies.”

For now, the researchers are focusing on achieving higher power and a broader range of colours (perhaps even down to blue wavelengths). They also hope to make devices that can be better controlled and tuned. “We are also interested in laser injection locking with frequency-converted lasers, or using other techniques to further enhance the coherence of our lasers,” says Stone.

The work is detailed in Light: Science & Applications.

Researchers exploit quantum entanglement to create hidden images

25 Sep 2024

Encoding an image into the quantum correlations of photon pairs makes it invisible to conventional imaging techniques

Encoding images in photon correlations

Ever since the double-slit experiment was performed, physicists have known that light can be observed as either a wave or a stream of particles. For everyday imaging applications, it is the wave-like aspect of light that manifests, with receptors (natural or artificial) capturing the information contained within the light waves to “see” the scene being observed.

Now, Chloé Vernière and Hugo Defienne from the Paris Institute of Nanoscience at Sorbonne University have used quantum correlations to encode an image into light such that it only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera – otherwise the image is hidden from view.

Encoding information in quantum correlations

In a study described in Physical Review Letters, Vernière and Defienne managed to hide an image of a cat from conventional light measurement devices by encoding the information in quantum entangled photons, known as a photon-pair correlation. To achieve this, they shaped spatial correlations between entangled photons – in the form of arbitrary amplitude and phase objects – to encode image information within the pair correlation. Once the information is encoded into the photon pairs, it is undetectable by conventional measurements. Instead, a single-photon detector known as an electron-multiplying charge-coupled device (EMCCD) camera is needed to “show” the hidden image.

“Quantum entanglement is a fascinating phenomenon, central to many quantum applications and a driving concept behind our research,” says Hugo Defienne. “In our previous work, we demonstrated that, in certain cases, quantum correlations between photons are more resistant to external disturbances, such as noise or optical scattering, than classical light. Inspired by this, we wondered how this resilience could be leveraged for imaging. We needed to use these correlations as a support – a ‘canvas’ – to imprint our image, which is exactly what we’ve achieved in this work.”

How to hide an image

The researchers used a technique known as spontaneous parametric down-conversion (SPDC), which is used in many quantum optics experiments, to generate the entangled photons. SPDC is a nonlinear process that uses a nonlinear crystal (NLC) to split a single high-energy photon from a pump beam into two lower energy entangled photons. The properties of the lower energy photons are governed by the geometry and type of the NLC and the characteristics of the pump beam.

In this study, the researchers used a continuous-wave laser that produced a collimated beam of horizontally polarized 405 nm light to illuminate a standing cat-shaped mask, which was then Fourier imaged onto an NLC using a lens. The spatially entangled near-infrared (810 nm) photons, produced after passing through the NLC, were then detected using another lens and the EMCCD.
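Energy conservation fixes the arithmetic here too: one pump photon splits into two photons whose inverse wavelengths sum to the pump’s. A minimal sketch (ours, for illustration) shows why a 405 nm pump yields photon pairs at 810 nm in the degenerate case:

```python
# SPDC energy conservation: 1/lam_pump = 1/lam_signal + 1/lam_idler.
def spdc_idler_nm(pump_nm, signal_nm):
    """Idler wavelength (nm) for a given pump and signal wavelength."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Degenerate case used in the experiment: a 405 nm pump photon splits
# into a pair of 810 nm photons.
print(spdc_idler_nm(405, 810))  # -> 810.0
```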

This SPDC process produces an encoded image of a cat. The image does not show up on a conventional camera and only becomes visible when the photons are counted one by one using the EMCCD. In this way, the image of the cat is “hidden” in the light, unobservable by traditional cameras.

“It is incredibly intriguing that an object’s image can be completely hidden when observed classically with a conventional camera, but then when you observe it ‘quantumly’ by counting the photons one by one and examining their correlations, you can actually see it,” says Vernière, a PhD student on the project. “For me, it is a completely new way of doing optical imaging, and I am hopeful that many powerful applications will emerge from it.”

What’s next?

This research builds on previous work, and Defienne says that the team’s next goal is to show that this new method of imaging has practical applications and is not just a scientific curiosity. “We know that images encoded in quantum correlations are more resistant to external disturbances – such as noise or scattering – than classical light. We aim to leverage this resilience to improve imaging depth in scattering media.”

When asked about the applications that this development could impact, Defienne tells Physics World: “We hope to reduce sensitivity to scattering and achieve deeper imaging in biological tissues or longer-range communication through the atmosphere than traditional technologies allow. Even though we are still far from it, this could potentially improve medical diagnostics or long-range optical communications in the future.”

Ambipolar electric field helps shape Earth’s ionosphere

25 Sep 2024

Scientists make first ever measurements of a planet-wide field that could be as fundamental as gravity and magnetic fields

A drop in electric potential of just 0.55 V, measured at altitudes of between 250 and 768 km above the North and South poles, could be the first direct measurement of our planet’s long-sought-after electrostatic field. The measurements, from NASA’s Endurance mission, reveal that this field is important for driving how ions escape into space and for shaping the upper layer of the atmosphere, known as the ionosphere.

Researchers first predicted the existence of the ambipolar electric field in the 1960s, when the first spacecraft flying over the Earth’s poles detected charged particles (including positively charged hydrogen and oxygen ions) flowing out from the atmosphere. The theory of a planet-wide electric field was developed to explain this “polar wind”, but the effects of the field were thought to be too weak to detect. Indeed, if the ambipolar field were the only mechanism driving the electrostatic field of Earth, the resulting electric potential drop across the exobase transition region (which lies at an altitude of between 200 and 780 km) could be as low as about 0.4 V.

A team of researchers led by Glyn Collinson at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, has now succeeded in measuring this field for the first time thanks to a new instrument called a photoelectron spectrometer, which they developed. The device was mounted on the Endurance rocket, which was launched from Svalbard in the Norwegian Arctic in May 2022. “Svalbard is the only rocket range in the world where you can fly through the polar wind and make the measurements we needed,” says team member Suzie Imber, who is a space physicist at the University of Leicester, UK.

Just the “right amount”

The rocket reached a peak altitude of 768.03 km and, over its 19 min flight, the onboard spectrometer measured the energies of electrons every 10 seconds. It recorded a drop in electric potential of 0.55 ± 0.09 V over an altitude range of 258–769 km. While tiny, this is just the “right amount” to explain the polar wind without any other atmospheric effects, says Collinson.

The researchers showed that the ambipolar field, which is generated exclusively by the outward pressure of ionospheric electrons, increases the “scale height” of the ionosphere to 271% of its unperturbed value (from 77.0 km to 208.9 km). This part of the atmosphere therefore remains denser to greater heights than it would if the field did not exist. The field also increases the supply of cold oxygen ions (O+) to the magnetosphere (near the 768 km peak of the flight) by more than 3.8%, counteracting the effects of other mechanisms (such as wave–particle interactions) that can heat and accelerate these particles to velocities high enough for them to escape into space. The field also probably explains why the magnetosphere is made up primarily of cold hydrogen ions (H+).
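For a sense of scale, a back-of-the-envelope check (ours, not a calculation from the paper) compares the 0.55 eV that the measured potential drop gives a singly charged ion with the kinetic energy needed to escape Earth’s gravity from these altitudes:

```python
# Energy a 0.55 V potential drop gives a singly charged ion, versus the
# kinetic energy needed to escape Earth's gravity from ~500 km altitude
# (a representative height within the measured range, our assumption).
e = 1.602e-19        # elementary charge, C
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth mass, kg
R = 6.371e6 + 500e3  # radius at ~500 km altitude, m

def escape_energy_eV(mass_kg):
    """Kinetic energy (eV) needed to escape from radius R."""
    return G * M_earth * mass_kg / R / e

m_H = 1.674e-27  # hydrogen ion mass, kg
m_O = 2.657e-26  # oxygen ion mass, kg

print(f"H+ escape energy: {escape_energy_eV(m_H):.2f} eV")  # ~0.61 eV
print(f"O+ escape energy: {escape_energy_eV(m_O):.1f} eV")  # ~9.6 eV
# The 0.55 eV boost is comparable to the gravitational binding of light
# H+ ions -- enough to help drive them outward -- but far below that of
# the much heavier O+, which is instead lifted to greater heights.
```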

The ambipolar field could be as fundamental for our planet as its gravity and magnetic fields, says Collinson, and it may even have helped shape how the atmosphere evolved. Similar fields might also exist on other planets in the solar system with an atmosphere, including Venus and Mars. “Understanding the forces that cause Earth’s atmosphere to slowly leak to space may be important for revealing what makes Earth habitable and why we’re all here,” he tells Physics World. “It’s also crucial to accurately forecast the impact of geomagnetic storms and ‘space weather’.”

Looking forward, the scientists say they would like to make further measurements of the Earth’s ambipolar field in the future. Happily, they recently received endorsement for a follow-up rocket – called Resolute – to do just this.

Light-absorbing dye turns skin of a live mouse transparent

24 Sep 2024

The technique could be used to observe a wide range of deep-seated biological structures and activity

One of the difficulties when trying to image biological tissue using optical techniques is that tissue scatters light, which makes it opaque. This scattering occurs because the different components of tissue, such as water and lipids, have different refractive indices, and it limits the depth at which light can penetrate.

A team of researchers at Stanford University in the US has now found that a common water-soluble yellow dye (among several other dye molecules) that strongly absorbs near-ultraviolet and blue light can help make biological tissue transparent in just a few minutes, thus allowing light to penetrate more deeply. In tests on the skin, muscle and connective tissue of mice, the team used the technique to observe a wide range of deep-seated structures and biological activity.

In their work, the research team – led by Zihao Ou (now at The University of Texas at Dallas), Mark Brongersma and Guosong Hong – rubbed the common food dye tartrazine, which is yellow/red in colour, onto the abdomen, scalp and hindlimbs of live mice. By absorbing light in the blue part of the spectrum, the dye altered the refractive index of the water in the treated areas at red-light wavelengths, such that it more closely matched that of lipids in this part of the spectrum. This effectively reduced the refractive-index contrast between the water and the lipids and allowed the biological tissue to appear more transparent at this wavelength, albeit tinged with red.

In this way, the researchers were able to visualize internal organs, such as the liver, small intestine and bladder, through the skin without requiring any surgery. They were even able to observe fluorescent protein-labelled enteric neurons in the abdomen and monitor the movements of these nerve cells. This enabled them to generate maps showing different movement patterns in the gut during digestion. They were also able to visualize blood flow in the rodents’ brains and the fine structure of muscle sarcomere fibres in their hind limbs.

Reversible effect

The skin becomes transparent in just a few minutes and the effect can be reversed by simply rinsing off the dye.

So far, this “optical clearing” study has only been conducted on animals. But if extended to humans, it could offer a variety of benefits in biology, diagnostics and even cosmetics, says Hong. Indeed, the technique could help make some types of invasive biopsies a thing of the past.

“For example, doctors might be able to diagnose deep-seated tumours by simply examining a person’s tissue without the need for invasive surgical removal. It could potentially make blood draws less painful by helping phlebotomists easily locate veins under the skin and could also enhance procedures like laser tattoo removal by allowing more precise targeting of the pigment beneath the skin,” Hong explains. “If we could just look at what’s going on under the skin instead of cutting into it, or using radiation to get a less than clear look, we could change the way we see the human body.”

Hong tells Physics World that the collaboration originated from a casual conversation he had with Brongersma, at a café on Stanford’s campus during the summer of 2021. “Mark’s lab specializes in nanophotonics while my lab focuses on new strategies for enhancing deep-tissue imaging of neural activity and light delivery for optogenetics. At the time, one of my graduate students, Nick Rommelfanger (third author of the current paper), was working on applying the ‘Kramers-Kronig’ relations to investigate microwave–brain interactions. Meanwhile, my postdoc Zihao Ou (first author of this paper) had been systematically screening a variety of dye molecules to understand their interactions with light.”

Tartrazine emerged as the leading candidate, says Hong. “This dye showed intense absorption in the near-ultraviolet/blue spectrum (and thus strong enhancement of refractive index in the red spectrum), minimal absorption beyond 600 nm, high water solubility and excellent biocompatibility, as it is an FD&C-approved food dye.”

“We realized that the Kramers-Kronig relations could be applied to the resonance absorption of dye molecules, which led me to ask Mark about the feasibility of matching the refractive index in biological tissues, with the aim of reducing light scattering,” Hong explains. “Over the past three years, both our labs have had numerous productive discussions, with exciting results far exceeding our initial expectations.”
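The Kramers-Kronig relations tie a material’s absorption spectrum to its refractive index. As a rough illustration of the physics at play (a toy model of a single absorption line, not the paper’s analysis), the sketch below shows how absorption centred in the blue raises the real refractive index at longer, redder wavelengths where absorption is negligible:

```python
import numpy as np

# Toy model: a single Lorentzian absorption line (its real part is what
# a Kramers-Kronig transform of the absorption spectrum would return).
w = np.linspace(1.0, 8.0, 4000)      # angular frequency, arbitrary units
w0, gamma, strength = 5.0, 0.3, 0.5  # line centre, width, oscillator strength

chi = strength / (w0**2 - w**2 - 1j * gamma * w)  # susceptibility
n = np.sqrt(1 + chi)                              # complex refractive index

i_red = np.argmin(abs(w - 3.0))  # probe frequency well below resonance
i_res = np.argmin(abs(w - w0))   # on resonance
print(f"real index off resonance : {n[i_red].real:.4f}")
print(f"Im(n) off resonance      : {n[i_red].imag:.1e}")
print(f"Im(n) on resonance       : {n[i_res].imag:.2f}")
# Below resonance the real index is enhanced while absorption stays
# tiny -- the effect the dye exploits to index-match water to lipids
# at red wavelengths.
```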

The researchers say they are now focusing on identifying other dye molecules with greater efficiency in achieving tissue transparency. “Additionally, we are exploring methods for cells to express intensely absorbing molecules endogenously, enabling genetically encoded tissue transparency in live animals,” reveals Hong.

The study is detailed in Science.

Science thrives on constructive and respectful peer review

24 Sep 2024

Unhelpful or rude feedback can shake the confidence of early career researchers

It is Peer Review Week and celebrations are well under way at IOP Publishing (IOPP), which brings you the Physics World Weekly podcast.

Reviewer feedback to authors plays a crucial role in the peer-review process, boosting the quality of published papers to the benefit of authors and the wider scientific community. But sometimes authors receive very unhelpful or outright rude feedback about their work. These inappropriate comments can shake the confidence of early career researchers, and even dissuade them from pursuing careers in science.

Our guest in this episode is Laura Feetham-Walker, who is reviewer engagement manager at IOPP. She explains how the publisher is raising awareness of the importance of constructive and respectful peer review feedback and how innovations can help to create a positive peer review culture.

As part of the campaign, IOPP asked some leading physicists to recount the worst reviewer comments that they have received – and Feetham-Walker shares some real shockers in the podcast.

IOPP has created a video called “Unprofessional peer reviews can harm science” in which leading scientists share inappropriate reviews that they have received.

The publisher also offers a Peer Review Excellence training and certification programme, which equips early-career researchers in the physical sciences with the skills to provide constructive feedback.

Convection enhances heat transport in sea ice

24 Sep 2024

New mathematical framework could allow for more accurate climate models

The thermal conductivity of sea ice can significantly increase when convective flow is present within the ice. This new result, from researchers at Macquarie University, Australia, and the University of Utah and Dartmouth College, both in the US, could allow for more accurate climate models – especially since current global models only account for temperature and salinity and not convective flow.

Around 15% of the ocean’s surface is covered with sea ice at some time during the year. This thin layer separates the atmosphere from the ocean and regulates the heat exchange between the two in the polar regions of our planet. The thermal conductivity of sea ice is a key parameter in climate models, but it has proved difficult to measure because of the ice’s complex structure, which is made up of ice, air bubbles and brine inclusions that form as the ice freezes from the surface of the ocean downwards. Indeed, sea ice can be thought of as a porous composite material and is therefore very sensitive to changes in temperature and salinity.

The salty liquid within the brine inclusions is heavier than fresh ocean water. This results in convective flow within the ice, creating channels through which liquid can flow out, explains applied mathematician Noa Kraitzman at Macquarie, who led this new research effort. “Our new framework characterizes enhanced thermal transport in porous sea ice by combining advection-diffusion processes with homogenization theory, which simplifies complex physical properties into an effective bulk coefficient.”

Thermal conductivity of sea ice can increase by a factor of two to three

The new work builds on a 2001 study in which researchers observed an increase in thermal conductivity in sea ice at warmer temperatures. “In our calculations, we had to derive new bounds on the effective thermal conductivity, while also accounting for complex, two-dimensional convective fluid flow and developing a theoretical model that could be directly compared with experimental measurements in the field,” explains Kraitzman. “We employed Padé approximations to obtain the required bounds and parametrized the Péclet number specifically for sea ice, considering it as a saturated rock.”

Padé approximations are routinely used to approximate a function by a rational function of given order, and the Péclet number is a dimensionless parameter defined as the ratio of the rate of advection to the rate of diffusion.
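As a rough illustration of what the Péclet number captures here, the sketch below uses order-of-magnitude values that we have assumed for illustration (they are not figures from the paper) for brine moving through the warm bottom layer of the ice:

```python
# Peclet number Pe = v * L / kappa: rate of advection over diffusion.
kappa = 1.2e-6  # thermal diffusivity of ice, m^2/s (textbook value)
v = 1e-5        # assumed brine convection speed, m/s
L = 0.1         # length scale: the bottom ~10 cm of the ice, m

Pe = v * L / kappa
print(f"Pe ~ {Pe:.2f}")
# As Pe grows towards and beyond unity, advection by brine flow adds
# meaningfully to diffusive heat transport, raising the effective
# thermal conductivity of the ice.
```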

The results suggest that the effective thermal conductivity of sea ice can increase by a factor of two to three because of convective flow, especially in the lower, warmer sections of the ice, where temperature and the ice’s permeability favour convection, Kraitzman tells Physics World. “This enhancement is mainly confined to the bottom 10 cm during the freezing season, when convective flows are present within the sea ice. Incorporating these bounds into global climate models could improve their ability to predict thermal transport through sea ice, resulting in more accurate predictions of sea ice melt rates.”

Looking forward, Kraitzman and colleagues say they now hope to acquire additional field measurements to refine and validate their model. They also want to extend their mathematical framework to include more general 3D flows and incorporate the complex fluid exchange processes that exist between ocean and sea ice. “By addressing these different areas, we aim to improve the accuracy and applicability of our model, particularly in ocean-sea ice interaction models, aiming for a better understanding of polar heat exchange processes and their global impacts,” says Kraitzman.

The present work is detailed in Proceedings of the Royal Society A.

Short-range order always appears in new type of alloy

23 Sep 2024

New insights into hidden atomic ordering could help in the development of more robust alloys

Short-range order plays an important role in defining the properties and performance of “multi-principal element alloys” (MPEAs), but the way in which this order develops is little understood, making it difficult to control. In a surprising new discovery, a US-based research collaboration has found that this order exists regardless of how MPEAs are processed. The finding will help scientists develop more effective ways to improve the properties of these materials and even tune them for specific applications, especially those involving demanding conditions.

MPEAs are a relatively new type of alloy and consist of three or more components in nearly equal proportions. This makes them very different to conventional alloys, which are made from just one or two principal elements with trace elements added to improve their performance.

In recent years, MPEAs have attracted a flurry of interest thanks to their high strength, hardness and toughness over temperature ranges at which traditional alloys, such as steel, can fail. They could also be more resistant to corrosion, making them promising for use in extreme conditions, such as in power plants, or aerospace and automotive technologies, to name but three.

Ubiquitous short-range order

MPEAs were originally thought to be random solid solutions, with their constituent elements haphazardly dispersed, but recent experiments have shown that this is not the case.

The researchers – from Penn State University, the University of California, Irvine, the University of Massachusetts, Amherst, and Brookhaven National Laboratory – studied the cobalt/chromium/nickel (CoCrNi) alloy, one of the best-known examples of an MPEA. This face-centred cubic (FCC) alloy boasts the highest fracture toughness for an alloy at liquid helium temperatures ever recorded.

Using an improved transmission electron microscopy characterization technique combined with advanced three-dimensional printing and atomistic modelling, the team found that short-range order, which occurs when atoms are arranged in a non-random way over short distances, appears in three CoCrNi-based FCC MPEAs under a variety of processing and thermal treatment conditions.

Their computational modelling calculations also revealed that local chemical order forms in the liquid–solid interface when the alloys are rapidly cooled, even at a rate of 100 billion °C/s. This effect comes from the rapid atomic diffusion in the supercooled liquid, at rates equal to or even greater than the rate of solidification. Short-range order is therefore an inherent characteristic of FCC MPEAs, the researchers say.

The new findings contrast with the previous notion that the elements in MPEAs arrange themselves randomly in the crystal lattice if they cool rapidly during solidification. They also refute the idea that short-range order develops mainly during annealing (a process in which heating and slow cooling are used to improve material properties such as strength, hardness and ductility).

Short-range order can affect MPEA properties, such as strength or resistance to radiation damage. The researchers, who report their work in Nature Communications, say they now plan to explore how corrosion and radiation damage affect the short-range order in MPEAs.

“MPEAs hold promise for structural applications in extreme environments. However, to facilitate their eventual use in industry, we need to have a more fundamental understanding of the structural origins that give rise to their superior properties,” says team co-lead Yang Yang, who works in the engineering science and mechanics department at Penn State.

We should treat our students the same way we would want our own children to be treated

23 Sep 2024

Pete Vukusic says that students’ positive experiences matter profoundly

“Thank goodness I don’t have to teach anymore.” These were the words spoken by a senior colleague and former mentor upon hearing about the success of their grant application. They were someone I had respected. Such comments, however, reflect an attitude that persists across many UK higher-education (HE) science departments. Our departments’ students – our own children, even – studying at HE institutes across the UK deserve far better.

It is no secret in university science departments that lecturing, tutoring and lab supervision are perceived by some colleagues to be mere distractions from what they consider their “real” work and purpose to be. These colleagues may evasively try to limit their exposure to teaching, and their commitment to its high-quality delivery. This may involve focusing time and attention solely on research activities or being named on as many research grant applications as possible.

University workload models set time aside for funded research projects, as they should. Research grants provide universities with funding that contributes to their finances and are an undeniably important revenue stream. However, an aversion to – or flagrant avoidance of – teaching by some colleagues is encountered by many who have oversight and responsibility for the organization and provision of education within university science departments.

It is also a behaviour and mindset that is recognized by students, and which negatively impacts their university experience. Avoidance of teaching displayed, and sometimes privately endorsed, by senior or influential colleagues in a department can also shape its culture and compromise the quality of education that is delivered. Such attitudes have been known to diffuse into a department’s environment, negatively impacting students’ experiences and further learning. Students certainly notice and are affected by this.

The quality of physics students’ experiences depends on many factors. One is the likelihood of graduating with skills that make them employable and have successful careers. Others include: the structure, organization and content of their programme; the quality of their modules and the enthusiasm and energy with which they are delivered; the quality of the resources to which they have access; and the extent to which their individual learning needs are supported.

In the UK, the quality of departments’ and institutions’ delivery of these and other components has been assessed since 2005 by the National Student Survey (NSS). Although imperfect and continuing to evolve, it is commissioned every year by the Office for Students on behalf of UK funding and regulatory bodies and is delivered independently by Ipsos.

The NSS can be a helpful tool to gather final-year students’ opinions and experiences about their institutions and degree programmes. Publication of the NSS datasets in July each year should, in principle, provide departments and institutions with the information they need to recognize their weaknesses and improve their subsequent students’ experiences. They would normally be motivated to do this because of the direct impact NSS outcomes have on institutions’ league table positions. These league tables can tangibly impact student recruitment and, therefore, an institution’s finances.

My sincerely held contention, however, communicated some years ago to a red-faced finger-wagging senior manager during a fraught meeting, is this. We should ignore NSS outcomes. They don’t, and shouldn’t, matter. This is a bold statement; career-ending, even. I articulated that we and all our colleagues should instead wholeheartedly strive to treat our students as we would want our own children, or our younger selves, to be treated, across every academic aspect and learning-related component of their journey while they are with us. This would be the right and virtuous thing to do. In fact, if we do this, the positive NSS outcomes would take care of themselves.

Academic guardians

I have been on the frontline of university teaching, research, external examining and education leadership for close to 30 years. My heartfelt counsel, formed during this journey, is that our students’ positive experiences matter profoundly. They matter because, in joining our departments and committing three or more years and many tens of thousands of pounds to us, our students have placed their fragile and uncertain futures and aspirations into our hands.

We should feel privileged to hold this position and should respond to and collaborate with them positively, always supportively and with compassion, kindness and empathy. We should never be the traditionally tough and inflexible guardians of a discipline that is academically demanding, and which can, in a professional physics academic career, be competitively unyielding. That is not our job. Our roles, instead, should be as our students’ academic guardians, enthusiastically taking them with us across this astonishing scientific and mathematical world; teaching, supporting and enabling wherever we possibly can.

A narrative such as this sounds fantastical. It seems far removed from the rigours and tensions of day-in, day-out delivery of lecture modules, teaching labs and multiple research targets. But the metaphor it represents has been the beating heart of the most successfully effective, positive and inclusive learning environments I have encountered in UK and international HE departments during my long academic and professional journey.

I urge physics and science colleagues working in my own and other UK HE departments to remember and consider what it can be like to be an anxious or confused student, whose cognitive processes are still developing, whose self-confidence may be low and who may, separately, be facing other challenges to their circumstances. We should then behave appropriately. We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching. Ego has no place. We should show kindness, patience, and a willingness to engage them in a community of learning, framed by supportive and inclusive encouragement. We should treat our students the way we would want our own children to be treated.

Working in quantum tech: where are the opportunities for success?

23 Sep 2024

Quantum professionals describe the emerging industry, and the skills required to thrive

The quantum industry is booming. An estimated $42bn was invested in the sector in 2023, a figure projected to rise to $106bn by 2040. In this episode of Physics World Stories, two experts from the quantum industry share their experiences and give advice on how to enter this blossoming sector. Quantum technologies – including computing, communications and sensing – could vastly outperform today’s technology for certain applications, such as efficient and scalable artificial intelligence.

Our first guest is Matthew Hutchings, chief product officer and co-founder of SEEQC. Based in New York and with facilities in Europe, SEEQC is developing a digital quantum computing platform with a broad industrial market due to its combination of classical and quantum technologies. Hutchings speaks about the increasing need for engineering positions in a sector that to date has been dominated by workers with a PhD in quantum information science.

The second guest is Araceli Venegas-Gomez, founder and CEO of QURECA, which helps to train and recruit individuals, while also providing business development services. Venegas-Gomez’s journey into the sector began with her reading about quantum mechanics as a hobby while working in aerospace engineering. In launching QURECA, she realized there was an important gap to be filled between quantum information science and business – two communities that have tended to speak entirely different languages.

Get even more tips and advice in the recent feature article ‘Taking the leap – how to prepare for your future in the quantum workforce’.

Thermal dissipation decoheres qubits

23 Sep 2024

Superconducting quantum bits release their energy into their environment as photons

How does a Josephson junction, the basic component of a superconducting quantum bit (or qubit), release its energy into the environment? It is radiated as photons, according to new experiments by researchers at Aalto University in Finland, working in collaboration with colleagues from Spain and the US, who used a thermal radiation detector known as a bolometer to measure this radiation directly in the electrical circuits holding the qubits. The work will allow for a better understanding of the loss and decoherence mechanisms that can disrupt and destroy quantum information in qubits, they say.

Quantum computers make use of qubits to store and process information. The most advanced quantum computers to date – including those being developed by IT giants Google and IBM – use qubits made from superconducting electronic circuits operating at very low temperatures. To further improve qubits, researchers need to better understand how they dissipate heat, says Bayan Karimi, who is the first author of a paper describing the new study. This heat transfer is a form of decoherence – a phenomenon by which the quantum states in qubits revert to behaving like classical 0s and 1s and lose the precious quantum information they contain.

“An understanding of dissipation in a single Josephson junction coupled to an environment remains strikingly incomplete, however,” she explains. “Today, a junction can be modelled and characterized without a detailed knowledge of, for instance, where energy is dissipated in a circuit. But improving design and performance will require a more complete picture.”

Physical environment is important

In the new work, Karimi and colleagues used a nano-bolometer to measure the very weak radiation emitted from a Josephson junction over a broad range of frequencies up to 100 GHz. The researchers identified several operation regimes depending on the junction bias, each with a dominant dissipation mechanism. “The whole frequency-dependent power and shape of the current-voltage characteristics can be attributed to the physical environment of the junction,” says Jukka Pekola, who led this new research effort.

The thermal detector works by converting radiation into heat and is composed of an absorber (made of copper), the temperature of which changes when it detects the radiation. The researchers measure this variation using a sensitive thermometer, comprising a tunnel junction between the copper absorber and a superconductor.
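Some rough numbers (ours, with an assumed operating temperature rather than one quoted in the paper) show why a cold absorber can register such faint signals: a single photon at the top of the measured band carries far more energy than the thermal energy scale of the detector.

```python
# Compare the energy of a 100 GHz photon with k_B*T for an absorber at
# an assumed dilution-refrigerator temperature of 25 mK.
h = 6.626e-34   # Planck constant, J s
kB = 1.381e-23  # Boltzmann constant, J/K

E_photon = h * 100e9    # ~6.6e-23 J
E_thermal = kB * 0.025  # ~3.5e-25 J
print(f"photon / thermal energy ratio ~ {E_photon / E_thermal:.0f}")
# The photon carries roughly 200x the thermal energy scale, so absorbed
# radiation produces a temperature rise the tunnel-junction
# thermometer can resolve.
```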

“Our work will help us better understand the nature of heat dissipation of qubits that can disrupt and destroy quantum information and how these coherence losses can be directly measured as thermal losses in the electrical circuit holding the qubits,” Karimi tells Physics World.

In the current study, which is detailed in Nature Nanotechnology, the researchers say they measured continuous energy release from a Josephson junction when it was biased by a voltage. They now aim to find out how their detector can sense single heat loss events when the Josephson junction or qubit releases energy. “At best, we will be able to count single photons,” says Pekola.

The physics of cycling’s ‘Everesting’ challenge revealed

20 Sep 2024

“Everesting” involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m.

The challenge became popular during the COVID-19 lockdowns and in 2021 the Irish cyclist Ronan McLaughlin was reported to have set a new “Everesting” record of 6:40:54. This was almost 20 minutes faster than the previous world record of 6:59:38 set by the US’s Sean Gardner in 2020.

Yet a debate soon ensued on social media concerning the significant tailwind that day of 5.5 metres per second, which critics claimed would have helped McLaughlin to climb the hill multiple times.

But did it? To investigate, Martin Bier, a physicist at East Carolina University in North Carolina, has now analysed what effect air resistance might have when cycling up and down a hill.

“Cycling uses ‘rolling’, which is much smoother and faster, and more efficient [than running],” notes Bier. “All of the work is purely against gravity and friction.”

Bier calculated that a tailwind does help slightly when going uphill, but most of the work when doing so is generating enough power to overcome gravity rather than air resistance.

When coming downhill, however, any headwind becomes significant given that the force of air resistance increases with the square of the cyclist’s speed. The headwind can then have a huge effect, causing a significant reduction in speed.
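A simple power-balance model makes the asymmetry concrete. The sketch below is our toy calculation with assumed rider parameters, not Bier’s analysis: the rider’s power goes into climbing the grade, rolling resistance and pushing through air moving at the wind-adjusted speed.

```python
# Power to hold ground speed v (m/s) on a constant grade, with wind w
# (m/s, positive = tailwind). All rider parameters are assumed values.
rho, CdA = 1.2, 0.3             # air density (kg/m^3), drag area (m^2)
m, g, Crr = 75.0, 9.81, 0.004   # mass (kg), gravity, rolling coefficient

def power(v, grade, w):
    gravity = m * g * grade * v         # small-angle approximation
    rolling = Crr * m * g * v
    air = v - w                         # air-relative speed
    drag = 0.5 * rho * CdA * air * abs(air) * v
    return gravity + rolling + drag

# Climbing a 10% grade at 5 m/s, gravity dominates: a 5.5 m/s tailwind
# saves only ~20 W of a ~400 W effort.
print(power(5, 0.10, 0) - power(5, 0.10, 5.5))
# Descending at 15 m/s, drag dominates: the same wind, now a headwind,
# costs hundreds of extra watts and sharply cuts descent speed.
print(power(15, -0.10, -5.5) - power(15, -0.10, 0))
```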

So, while a tailwind going up is negligible the headwind coming down certainly won’t be. “There are no easy tricks,” Bier adds. “If you want to be a better Everester, you need to lose weight and generate more [power]. This is what matters — there’s no way around it.”

Air-powered computers make a comeback

20 Sep 2024

Novel device contains a pneumatic logic circuit made from 21 microfluidic valves

A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California at Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers were able to make air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.
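In Boolean terms, the series and parallel arrangements map directly onto AND and OR. The toy model below (ours, not the team’s design) treats each valve as open (True) or closed (False) and asks whether air can get through the network:

```python
# Pneumatic logic as Booleans: air passes valves in series only if all
# are open (AND); it passes parallel valves if any one is open (OR).
def series(*valves):
    return all(valves)   # AND gate: valves in a row

def parallel(*valves):
    return any(valves)   # OR gate: valves side by side

# Truth tables for the two-valve gates described above:
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "AND:", series(a, b), "OR:", parallel(a, b))
```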

Complex logic circuits

Combining an increasing number of microfluidic valves enables the creation of complex air-powered logic circuits. In the new study, detailed in Device, Grover and colleagues made a device that uses 21 microfluidic valves to perform a parity bit calculation – an important calculation employed by many electronic computers to detect errors and other problems.
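A parity check of this kind reduces to XORing bits together, and an XOR can itself be assembled from AND/OR/NOT stages like those above. Here is a minimal sketch (ours, not a reconstruction of the 21-valve circuit) of the calculation the device performs:

```python
# Even-parity calculation: XOR all bits together; the result flags
# whether the count of 1s is odd.
def xor(a, b):
    # XOR from AND/OR/NOT, each of which has a valve-network analogue.
    return (a or b) and not (a and b)

def parity(bits):
    p = False
    for b in bits:
        p = xor(p, b)
    return p

frame = [True, False, True, True, True]  # example payload + parity bit
if parity(frame):
    print("parity error detected")       # the pneumatic version whistles
else:
    print("frame OK")
```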

The novel air-powered computer detects differences in air pressure flowing through the valves to count the number of bits. If there is an error, it outputs an error signal by blowing a whistle. As a proof-of-concept, the researchers used their device to detect anomalies in an intermittent pneumatic compression (IPC) device – a leg sleeve that fills with air and regularly squeezes a patient’s legs to increase blood flow, with the aim of preventing blood clots that could lead to strokes. Normally, these machines are monitored using electronic equipment.

“IPC devices can save lives, but they aren’t as widely employed as they could be,” says Grover. “In part, this is because they’re so expensive. We wanted to see if we could reduce their cost by replacing some of their electronic hardware with pneumatic logic.”

Air’s viscosity is important

Air-powered computers behave very similarly, but not quite identically to electronic computers, Grover adds. “For example, we can often take an existing electronic circuit and make an air-powered version of it and it’ll work just fine, but at other times the air-powered device will behave completely differently and we have to tweak the design to make it function.”

The variations between the two types of computers come down to one important physical difference between electricity and air, he explains: electricity does not have viscosity, but air does. “There are also lots of little design details that are of little consequence in electronic circuits but which become important in pneumatic circuits because of air’s viscosity. This makes our job a bit harder, but it also means we can do things with pneumatic logic that aren’t possible – or are much harder to do – with electronic logic.”

In this work, the researchers focused on biomedical applications for their air-powered computer, but they say that this is just the “tip of the iceberg” for this technology. Air-powered systems are ubiquitous, from the brakes on a train, to assembly-line robots and medical ventilators, to name but three. “By using air-powered computers to operate and monitor these systems, we could make these important systems more affordable, more reliable and safer,” says Grover.

“I have been developing air-powered logic for around 20 years now, and we’re always looking for new applications,” he tells Physics World. “What is more, there are areas in which they have advantages over conventional electronic computers.”

One specific application of interest is moving grain inside silos, he says. These enormous structures hold grain and other agricultural products and people often have to climb inside to spread out the grain – an extremely dangerous task because they can become trapped and suffocate.

“Robots could take the place of humans here, but conventional electronic robots could generate sparks that could ignite flammable dust inside the silo,” Grover explains. “An air-powered robot, on the other hand, would work inside the silo without this risk. We are thus working on an air-powered ‘brain’ for such a robot to keep people out of harm’s way.”

Air-powered computers aren’t a new idea, he adds. Decades ago, there was a multitude of devices being designed that ran on water or air to perform calculations. Air-powered computers fell out of favour, however, when transistors and integrated circuits made electronic computers feasible. “We’ve therefore largely forgotten the history of computers that ran on things other than electricity. Hopefully, our new work will encourage more researchers to explore new applications for these devices.”

Quantum hackathon makes new connections

20 Sep 2024

The 2024 UK Quantum Hackathon set new standards for engagement and collaboration

It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”

Teamwork

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.

“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – in her case, finding the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explains. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between data such as insurance losses, stock market data and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”

The team presenting their results

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the NQCC’s Quantum STATES principles for responsible and ethical quantum computing (REQC), developed and piloted at the centre, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk

The post Quantum hackathon makes new connections appeared first on Physics World.

]]>
Analysis The 2024 UK Quantum Hackathon set new standards for engagement and collaboration https://physicsworld.com/wp-content/uploads/2024/09/frontis-web.png
Rheo-electric measurements to predict battery performance from slurry processing https://physicsworld.com/a/rheo-electric-measurements-to-predict-battery-performance-from-slurry-processing/ Fri, 20 Sep 2024 06:58:33 +0000 https://physicsworld.com/?p=116835 Join the audience for a live webinar on 6 November 2024 sponsored by TA Instruments – Waters in partnership with The Electrochemical Society

The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>

The market for lithium-ion batteries (LIBs) is expected to grow ~30x, to almost 9 TWh produced annually by 2040, driven by demand from electric vehicles and grid-scale storage. Production of these batteries requires high-yield coating processes using slurries of active material, conductive carbon, and polymer binder applied to metal foil current collectors. To better understand the connections between slurry formulation, coating conditions, and composite electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon black structure in the slurry that go undetected by rheological measurements alone. Rheo-electric results are connected to characterization of coated electrodes in LIBs in order to develop methods to predict the performance of a battery system based on the formulation and coating conditions of the composite electrode slurries.

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 6 November 2024 sponsored by TA Instruments – Waters in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/09/2024-11-06-webinarimage.jpg
Simultaneous structural and chemical characterization with colocalized AFM-Raman https://physicsworld.com/a/simultaneous-structural-and-chemical-characterization-with-colocalized-afm-raman/ Thu, 19 Sep 2024 15:27:06 +0000 https://physicsworld.com/?p=116806 Join the audience for a live webinar on 22 October 2024 sponsored by HORIBA

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>

The combination of atomic force microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of various materials. Raman spectroscopy facilitates the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials, while AFM provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness, and stiffness at the nanoscale.

Traditionally, users must rely on multiple instruments to gather such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel currently serves as the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, his PhD research focused on biochemical changes in the skin dermis. João joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant before moving into a full-time role as an application scientist and sales support across Latin America, where he oversaw applicative sales support and co-managed business activities within the region. In 2022, João joined HORIBA France as a correlative microscopy – Raman application specialist, responsible for developing the correlative business globally by combining HORIBA’s existing technologies with complementary techniques. In 2023, João was promoted to AFM & AFM-Raman global product manager, a role in which he oversees strategic initiatives aimed at the company’s business sustainability, continued success and future growth.

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

]]>
Join the audience for a live webinar on 22 October 2024 sponsored by HORIBA https://physicsworld.com/wp-content/uploads/2024/09/2024-10-22-webinar-image.jpg
Diagnosing and treating disease: how physicists keep you safe during healthcare procedures https://physicsworld.com/a/diagnosing-and-treating-disease-how-physicists-keep-you-safe-during-healthcare-procedures/ Thu, 19 Sep 2024 14:42:15 +0000 https://physicsworld.com/?p=116888 Two medical physicists talk about the future of treatment and diagnostic technologies

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast features two medical physicists working at the heart of the UK’s National Health Service (NHS). They are Mark Knight, who is chief healthcare scientist at the NHS Kent and Medway Integrated Care Board, and Fiammetta Fedele, who is head of non-ionizing radiation at Guy’s and St Thomas’ NHS Foundation Trust in London.

They explain how medical physicists keep people safe during healthcare procedures – while innovating new technologies and treatments. They also discuss the role that artificial intelligence could play in medical physics and take a look forward to the future of healthcare.

This episode is supported by RaySearch Laboratories.

RaySearch Laboratories unifies industry solutions, empowering healthcare providers to deliver precise and effective radiotherapy treatment. RaySearch products transform scattered technologies into clarity, elevating the radiotherapy industry.

The post Diagnosing and treating disease: how physicists keep you safe during healthcare procedures appeared first on Physics World.

]]>
Podcasts Two medical physicists talk about the future of treatment and diagnostic technologies https://physicsworld.com/wp-content/uploads/2024/09/Mark-knight-Fiammetta-Fedele.jpg newsletter
RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia https://physicsworld.com/a/radcalc-qa-ensuring-safe-and-efficient-radiotherapy-throughout-australia/ Thu, 19 Sep 2024 12:45:15 +0000 https://physicsworld.com/?p=116746 Cancer care provider GenesisCare is using LAP’s RadCalc platform to perform software-based quality assurance of all its radiotherapy treatment plans

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

]]>
GenesisCare is the largest private radiation oncology provider in Australia, operating across five states and treating around 30,000 cancer patients each year. At the heart of this organization, ensuring the safety and efficiency of all patient radiotherapy treatments, lies a single server running LAP’s RadCalc quality assurance (QA) software.

RadCalc is a 100% software-based platform designed to streamline daily patient QA. The latest release, version 7.3.2, incorporates advanced 3D algorithms for secondary verification of radiotherapy plans, EPID-based pre-treatment QA and in vivo dosimetry, as well as automated 3D calculation based on treatment log files.

For GenesisCare, RadCalc provides independent secondary verification for 100 to 130 new plans each day, from more than 43 radiation oncology facilities across the country. The use of a single QA platform for all satellite centres helps to ensure that every patient receives the same high standard of care. “With everyone using the same software, we’ve got a single work instruction and we’re all doing things the same way,” says Leon Dunn, chief medical physicist at GenesisCare in Victoria.

“While the individual states operate as individual business units, the physics team operates as one, and the planners operate as one team as well,” adds Peter Mc Loone, GenesisCare’s head of physics for Australia. “We are like one team nationally, so we try to do things the same way. Obviously, it makes sense to make sure everyone’s checking the plans in the same way as well.”

User approved

GenesisCare implemented RadCalc more than 10 years ago, selected in part due to the platform’s impressive reputation amongst its users in Australia. “At that time, RadCalc was well established in radiotherapy and widely used,” explains Dunn. “It didn’t have all the features that it has now, but its basic features met the requirements we needed and it had a pretty solid user base.”

Today, GenesisCare’s physicists employ RadCalc for plan verification of all types of treatment across a wide range of radiotherapy platforms – including Varian and Elekta linacs, Gamma Knife and the Unity MR-linac, as well as superficial treatments and high dose-rate brachytherapy. They also use RadCalc’s plan comparison tool to check that the output from the treatment planning system matches what was imported to the MOSAIQ electronic medical record system.

“Before we had the plan comparison feature, our radiation therapists had to manually check control points in the plan against what was on the machine,” says Mc Loone. “RadCalc checks a wide range of values within the plan. It’s a very quick check that has saved us a lot of time, but also increased the safety aspect. We have certainly picked up errors through its use.”

Keeping treatments safe

The new feature that’s helping to make a big difference, however, is GenesisCare’s recent implementation of RadCalc’s 3D independent recalculation tool. Dunn explains that RadCalc previously performed a 2D comparison between the dose to a single point in the treatment planning system and the calculated dose to that point.

The new module, on the other hand, employs RadCalc’s collapsed-cone convolution algorithm to reconstruct 3D dose on the patient’s entire CT data set. Enabled by the introduction of graphics processing units, the algorithm performs a completely independent 3D recalculation of the treatment plan on the patient’s data.  “We’ve gone from a single point to tens of thousands of points,” notes Dunn.
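RadCalc’s algorithms are proprietary, but the logic of an independent second check is simple to sketch: recompute the dose on the same CT grid with a different engine, compare it voxel by voxel against the planning system, and flag plans whose agreement falls below tolerance. The sketch below uses invented dose grids, a 3% local-dose tolerance and a 95% pass threshold – none of these are RadCalc’s actual values.

```python
# Minimal sketch of an independent secondary dose check, assuming the QA
# system recomputes a 3D dose grid and compares it with the TPS dose.
# RadCalc's actual collapsed-cone implementation and tolerances differ.
import numpy as np

rng = np.random.default_rng(0)
tps_dose = rng.uniform(0.0, 2.0, size=(64, 64, 64))             # planned dose (Gy), invented
recalc_dose = tps_dose * rng.normal(1.0, 0.01, tps_dose.shape)  # independent recalculation

mask = tps_dose > 0.1 * tps_dose.max()   # evaluate only clinically relevant voxels
pct_diff = 100 * np.abs(recalc_dose[mask] - tps_dose[mask]) / tps_dose[mask]
pass_rate = np.mean(pct_diff <= 3.0)     # e.g. a 3% local-dose tolerance (assumed)

print(f"voxels within tolerance: {pass_rate:.1%}")
if pass_rate < 0.95:                     # flag the plan before it reaches approval
    print("FLAG: plan needs review / confirmatory measurement")
```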

Importantly, this 3D recalculation can discover any errors within a treatment plan before it gets to the point at which it needs to be measured. “Our priority is for every patient to have that second check done, thereby catching anything that is wrong with the treatment plan, hopefully before it is seen by the doctor. So we can fix things before they could become an issue,” Dunn says, pointing out that in the first couple of months of using this tool, it highlighted potentially suboptimal treatment plans to be improved.

Peter Mc Loone

In contrast, previous measurement-based checks had to be performed at the end of the entire planning process, after everyone had approved the plan and it had been exported to the treatment system. “Finding an error at that point puts a lot of pressure on the team to redo the plan and have everything reapproved,” Mc Loone explains. “By removing that stress and allowing checks to happen earlier in the piece, it makes the overall process safer and more efficient.”

Dunn notes that if the second check shows a problem with the plan, the plan can still be sent for measurements if needed, to confirm the RadCalc findings.

Increasing efficiency

As well as improving safety, the ability to detect errors early on in the planning process speeds up the entire treatment pathway. Operational efficiency is additionally helped by RadCalc’s high level of automation.

Once a treatment plan is created, the planning staff export it to RadCalc with a single click. RadCalc then takes care of everything else, importing the entire data set, sending it to the server for recalculation and then presenting the results. “We don’t have to touch any of the processes until we get the quality checklist out, and that’s a real game changer for us,” says Dunn.

“We have one RadCalc system that can handle five different states and several different treatment planning systems [Varian’s Eclipse and Elekta’s Monaco and GammaPlan],” notes Mc Loone. “We can have 130 different plans coming in, and RadCalc will filter them correctly and apply the right beam models using that automation that LAP has built in.”

Because RadCalc performs 100% software-based checks, it doesn’t require access to the treatment machine to run the QA (which usually means waiting until the day’s clinical session has finished). “We’re no longer waiting around to perform measurements on the treatment machine,” Dunn explains. “It’s all happening while the patients are being treated during the normal course of the day. That automation process is an important time saver for us.”

This shift from measurement- to software-based QA also has a huge impact on the radiation therapists. As they were already using the machines to treat patients, the therapists were tasked with delivering most of the QA cases – at the end of the day or in between treatment sessions – and informing the physicists of any failures.

“Since we’ve introduced RadCalc, they essentially get all that time back and can focus on doing what they do best, treating patients and making sure it’s all done safely,” says Dunn. “Taking that burden away from them is a great additional bonus.”

Looking to the future, GenesisCare next plans to implement RadCalc’s log file analysis feature, which will enable the team to monitor and verify the performance of the radiotherapy machines. Essentially, the log files generated after each treatment are brought back into RadCalc, which then verifies that what the machine delivered matched the original treatment plan.

“Because we have so many plans going through, delivered by many different accelerators, we can start to build a picture of machine performance,” says Dunn. “In the future, I personally want to look at the data that we collect through RadCalc. Because everything’s coming through that one system, we’ve got a real opportunity to examine safety and quality at a system level, from treatment planning system through to patient treatment.”

The post RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia appeared first on Physics World.

]]>
Analysis Cancer care provider GenesisCare is using LAP’s RadCalc platform to perform software-based quality assurance of all its radiotherapy treatment plans https://physicsworld.com/wp-content/uploads/2024/09/RadCalc-Physics-World.jpg newsletter
The free-to-read Physics World Big Science Briefing 2024 is out now https://physicsworld.com/a/the-free-to-read-physics-world-big-science-briefing-2024-is-out-now/ Thu, 19 Sep 2024 12:00:09 +0000 https://physicsworld.com/?p=116843 Find out more about designs for a muon collider and why gender diversity in big science needs recognition

The post The free-to-read Physics World Big Science Briefing 2024 is out now appeared first on Physics World.

]]>
In recent decades, “big science” has become bigger than ever, be it in planning larger particle colliders, fusion tokamaks or space observatories. That development is reflected in the growth of the Big Science Business Forum (BSBF), which has been going from strength to strength following its first meeting in Copenhagen in 2018.

This year, more than 1000 delegates from 500 organizations and 30 countries will descend on Trieste from 1 to 4 October for BSBF 2024. The meeting will see European businesses and organizations such as the European Southern Observatory, the CERN particle-physics laboratory and Fusion 4 Energy come together to discuss the latest developments and business trends in big science.

A key component of the event – as it was at the previous BSBF in Granada, Spain, in 2022 – is the Women in Big Science group, who will be giving a plenary session about initiatives to boost and help women in big science.

In this year’s Physics World Big Science Briefing, Elizabeth Pollitzer – co-founder and director of Portia, which seeks to improve gender equality in science, technology, engineering and mathematics – explains why we need gender equality in big science and what measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures.

One prime example of big science is particle physics. Some 70 years since the founding of CERN and a decade following the discovery of the Higgs boson at the lab’s Large Hadron Collider (LHC) in 2012, particle physics stands at a crossroads. While the consensus is that a “Higgs factory” should come next after the LHC, there is disagreement over what kind of machine it should be – a large circular collider some 91 km in circumference or a linear machine just a few kilometres long.

As the wrangling goes on, other proposals are also being mooted such as a muon collider. Despite needing new technologies, a muon collider has the advantage that it would only require a circular collider in a tunnel roughly the size of the LHC.

Another huge multinational project is the ITER fusion tokamak currently under construction in Cadarache, France. Hit by cost hikes and delays for decades, the project received more bad news earlier this year when ITER said the tokamak will now not fire up until 2035. “Full power” mode with deuterium and tritium won’t happen until 2039, some 50 years after the facility was first mooted.

Backers hope that ITER will pave the way towards fusion power plants delivering electricity to the grid, but huge technical challenges lie in store. After all, those reactors will have to breed their own tritium so that they become fuel independent, as John Evans explains.

Big science also involves dedicated user facilities. In this briefing we talk to Gianluigi Botton from the Diamond Light Source in the UK and Mike Witherell from the Lawrence Berkeley National Laboratory about managing such large-scale research infrastructures and their plans for the future.

We hope you enjoy the briefing and let us know your feedback on the issue.

The post The free-to-read Physics World Big Science Briefing 2024 is out now appeared first on Physics World.

]]>
Blog Find out more about designs for a muon collider and why gender diversity in big science needs recognition https://physicsworld.com/wp-content/uploads/2019/09/cern-cms-crop.jpg 1
Vortex cannon generates toroidal electromagnetic pulses https://physicsworld.com/a/vortex-cannon-generates-toroidal-electromagnetic-pulses/ Thu, 19 Sep 2024 09:34:09 +0000 https://physicsworld.com/?p=116855 Electromagnetic vortex pulses could be employed for information encoding, high-capacity communication and more

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

]]>
electromagnetic cannons emit electromagnetic vortex pulses thanks to coaxial horn antennas

Toroidal electromagnetic pulses can be generated using a device known as a horn microwave antenna. This electromagnetic “vortex cannon” produces skyrmion topological structures that might be employed for information encoding or for probing the dynamics of light–matter interactions, according to its developers in China, Singapore and the UK.

Examples of toroidal or doughnut-like topology abound in physics – in objects such as Möbius strips and Klein bottles, for example. It is also seen in simpler structures like smoke rings in air and vortex rings in water, as well as in nuclear currents. Until now, however, no one had succeeded in directly generating this topology in electromagnetic waves.

A rotating electromagnetic wave structure

In the new work, a team led by Ren Wang from the University of Electronic Science and Technology of China, Yijie Shen from Nanyang Technological University in Singapore and colleagues from the University of Southampton in the UK employed wideband, radially polarized, conical coaxial horn antennas with an operating frequency range of 1.3–10 GHz. They used these antennas to create a rotating electromagnetic wave structure with a frequency in the microwave range.

The antenna comprises inner and outer metal conductors, with 3D-printed conical and flat-shaped dielectric supports at the bottom and top of the coaxial horn, respectively

“When the antenna emits, it generates an instantaneous voltage difference that forms the vortex rings,” explains Shen. “These rings are stable over time – even in environments with lots of disturbances – and maintain their shape and energy over long distances.”

Complex features such as skyrmions

The conical coaxial horn antenna generates an electromagnetic field in free space that rotates around the propagation direction of the wave structure. The researchers experimentally mapped the toroidal electromagnetic pulses at propagation distances of 5, 50 and 100 cm from the horn aperture. They measured the spatial electromagnetic fields in a planar microwave anechoic chamber (a shielded room covered with electromagnetic absorbers), with a scanning frame moving the antenna to the desired measurement area. They then connected a vector network analyser to the transmitting and receiving antennas to obtain the magnitude and phase characteristics of the electromagnetic field at different positions.
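As a minimal illustration of the reconstruction step, the complex field at each scan position can be assembled from the magnitude and phase returned by the network analyser; the grid size and data below are invented.

```python
# Sketch: build a complex field map from VNA magnitude and phase data,
# as E = A * exp(i * phase) at each scan position. Data are invented.
import numpy as np

magnitude = np.random.rand(50, 50)                    # |E| at each scan point (arb. units)
phase = np.random.uniform(-np.pi, np.pi, (50, 50))    # phase from the VNA (rad)

field = magnitude * np.exp(1j * phase)                # complex field across the scan plane
print(field.shape, field.dtype)                       # (50, 50) complex128
```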

The researchers found that the toroidal pulses contained complex features such as skyrmions. These are made up of numerous electric field vectors and can be thought of as two-dimensional whirls (or “spin textures”). The pulses also evolved over time to more closely resemble canonical Hellwarth–Nouchi toroidal pulses. These structures, first theoretically identified by the two physicists they are named after, represent a radically different, non-transverse type of electromagnetic pulse with a toroidal topology. These pulses, which are propagating counterparts of localized toroidal dipole excitations in matter, exhibit unique electromagnetic wave properties, explain Shen and colleagues.

A wide range of applications

The researchers say that they got the idea for their new work by observing how smoke rings are generated from an air cannon. They decided to undertake the study because toroidal pulses in the microwave range have applications in a wide range of areas, including cell phone technology, telecommunications and global positioning. “Understanding both the propagation dynamics and characterizing the topological structure of these pulses is crucial for developing these applications,” says Shen.

The main difficulty faced in these experiments was generating the pulses in the microwave part of the electromagnetic spectrum. The researchers attempted to do this by adapting existing optical metasurface methodologies, but failed because a large metasurface aperture of several metres was required, which was simply too impractical to fabricate. They overcame the problem by making use of a microwave horn emitter that’s more straightforward to create.

Looking forward, the researchers now plan to focus on two main areas. The first is to develop communication, sensing, detection and metrology systems based on toroidal pulses, aiming to overcome the limitations of existing wireless applications. Secondly, they hope to generate higher-order toroidal pulses, also known as supertoroidal pulses.

“These possess unique characteristics such as propagation invariance, longitudinal polarization, electromagnetic vortex streets (organized patterns of swirling vortices) and higher-order skyrmion topologies,” Shen tells Physics World. “The supertoroidal pulses have the potential to drive the development of ground-breaking applications across a range of fields, including defence systems or space exploration.”

The study is detailed in Applied Physics Reviews.

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

]]>
Research update Electromagnetic vortex pulses could be employed for information encoding, high-capacity communication and more https://physicsworld.com/wp-content/uploads/2024/09/19-09-24-electromagnetic-cannon-featured.jpg newsletter1
A comprehensive method for assembly and design optimization of single-layer pouch cells https://physicsworld.com/a/a-comprehensive-method-for-assembly-and-design-optimization-of-single-layer-pouch-cells/ Wed, 18 Sep 2024 14:08:38 +0000 https://physicsworld.com/?p=114420 Join the audience for a live webinar on 23 October 2024 sponsored by BioLogic, EL-Cell and TA Instruments - Waters, in partnership with The Electrochemical Society

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

]]>

For academic researchers, the cell format for testing lithium-ion batteries is often overlooked. However, choices in cell format and design can affect cell performance more than one might expect. Coin cells that utilize either a lithium metal or greatly oversized graphite negative electrode are common but can provide unrealistic testing results when compared to commercial pouch-type cells. Instead, single-layer pouch cells provide a format more similar to those used in industry while not requiring large amounts of active material. Moreover, their assembly process allows for better positive/negative electrode alignment, enabling single-layer pouch cells to be assembled without negative electrode overhang. This talk presents a comparison between coin, single-layer pouch, and stacked pouch cells, and shows that single-layer pouch cells without negative electrode overhang perform best. Additionally, a careful study of the detrimental effects of excess electrode material is shown. The single-layer pouch cell format can also be used to measure pressure and volume in situ, something that is not possible in a coin cell. Last, a guide to assembling reproducible single-layer pouch cells without negative electrode overhang is presented.

An interactive Q&A session follows the presentation.

Matthew Garayt

Matthew D L Garayt is a PhD candidate in the Jeff Dahn, Michael Metzger, and Chongyin Yang Research Groups at Dalhousie University. His work focuses on materials for lithium- and sodium-ion batteries, with an emphasis on increased energy density and lifetime. Before this, he worked on high-power lithium-ion batteries at E-One Moli Energy, the first rechargeable lithium battery company in the world, and completed a summer research term in the Obrovac Research Group, also at Dalhousie. He received a BSc (Hons) in applied physics from Simon Fraser University.

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 23 October 2024 sponsored by BioLogic, EL-Cell and TA Instruments - Waters, in partnership with The Electrochemical Society https://physicsworld.com/wp-content/uploads/2024/05/2024-10-23ECSimage.jpg
Gallium-doped bioactive glass kills 99% of bone cancer cells https://physicsworld.com/a/gallium-doped-bioactive-glass-kills-99-of-bone-cancer-cells/ Wed, 18 Sep 2024 14:00:07 +0000 https://physicsworld.com/?p=116829 New therapy kills cancerous cells while stimulating growth of new healthy bone

The post Gallium-doped bioactive glass kills 99% of bone cancer cells appeared first on Physics World.

]]>
Osteosarcoma, the most common type of bone tumour, is a highly malignant cancer that mainly affects children and young adults. Patients are typically treated with an aggressive combination of resection and chemotherapy, but survival rates have not improved significantly since the 1970s. With alternative therapies urgently needed, a research team at Aston University has developed a gallium-doped bioactive glass that selectively kills over 99% of bone cancer cells.

The main objective of osteosarcoma treatment is to destroy the tumour and prevent recurrence. But over half of long-term survivors are left with bone mass deficits that can lead to fractures, making bone restoration another important goal. Bioactive glasses are already used to repair and regenerate bone – they bond with bone tissue and induce bone formation by releasing ions such as calcium, phosphorus and silicon. But they can also be designed to release therapeutic ions.

Team leader Richard Martin and colleagues propose that bioactive glasses doped with gallium ions could address both goals – helping to prevent cancer recurrence and lowering the risk of fracture. They designed a novel biomaterial that provides targeted drug delivery to the tumour site, while also introducing a regenerative scaffold to stimulate new bone growth.

“Gallium is a toxic ion that has been widely studied and is known to be effective for cancer therapy. Cancer cells tend to be more metabolically active and therefore uptake more nutrients and minerals to grow – and this includes the toxic gallium ions,” Martin explains. “Gallium is also known to inhibit bone resorption, which is important as bone cancer patients tend to have lower bone density and are more prone to fractures.”

Glass design

Starting with a silicate-based bioactive glass, the researchers fabricated six glasses doped with between 0 and 5 mol% of gallium oxide (Ga2O3). They then ground the glasses into powders with a particle size between 40 and 63 µm.

Martin notes that gallium is a good choice for incorporating into the glass, as it is effective in a variety of simple molecular forms. “Complex organic molecules would not survive the high processing temperatures required to make bioactive glasses, whereas gallium oxide can be incorporated relatively easily,” he says.

To test the cytotoxic effects of the bioactive glasses on cancer cells, the team created “conditioned media” by incubating the gallium-doped glass particles in cell culture media at concentrations of 10 or 20 mg/mL. After 24 h, the particles were filtered out to leave various levels of gallium ions in the media.

The researchers then exposed osteosarcoma cells, as well as normal osteoblasts as controls, to conditioned media from the six gallium-doped powders. Cell viability assays revealed significant cytotoxicity in cancer cells exposed to the conditioned media, with a reduction in cell viability correlating with gallium concentration.

After 10 days, cancer cells exposed to media conditioned with 10 mg/mL of the 4% and 5% gallium-doped glasses showed decreased cell viability, to roughly 60% and less than 10%, respectively. Media conditioned with 20 mg/mL of the 4% and 5% gallium-doped glasses were the most toxic to the cancer cells, causing 60% and more than 99% cell death, respectively, after 10 days.

Exposure to gallium-free bioglass did not significantly impact cell viability – confirming that the toxicity is due to gallium and not the other components of the glass (calcium, sodium, phosphorus and silicate ions).

While the glasses preferentially killed osteosarcoma cells compared with normal osteoblasts, some cytotoxic effects were also seen in the control cells. Martin believes that this slight toxicity to normal healthy cells is within safe limits, noting that the localized nature of the treatment should significantly reduce side effects compared with orally administered gallium.

“Further experiments are needed to confirm the safety of these materials,” he says, “but our initial studies show that these gallium-doped bioactive glasses are not toxic in vivo and have no effects on major organs such as the liver or kidneys.”

The researchers also performed live/dead assays on the osteosarcoma and control cells. The results confirmed the highly cytotoxic effect of gallium-doped bioactive glass on the cancer cells with relatively minor toxicity towards normal cells. They also found that exposure to the gallium-doped glass significantly reduced cancer cell proliferation and migration.

Bone regeneration

To test whether the bioactive glasses could also help to heal bone, the team exposed glass samples to simulated body fluid for seven days. Under these physiological conditions, the glasses gradually released calcium and phosphorous ions.

Fourier-transform infrared (FTIR) and energy-dispersive X-ray spectroscopy revealed that these ions precipitated onto the glass surface to form an amorphous calcium phosphate/hydroxyapatite layer – indicating the initial stages of bone regeneration. For clinical use, the glass particles could be mixed into a paste and injected into the void created during tumour surgery.

“This bioactivity will help generate new bone formation and prevent bone mass deficits and potential future fractures,” Martin and colleagues conclude. “The results when combined strongly suggest that gallium-doped bioactive glasses have great potential for osteosarcoma-related bone grafting applications.”

Next, the team plans to test the materials on a wide range of bone cancers to ensure the treatment is effective against different cancer types, as well as optimizing the dosage and delivery before undertaking preclinical tests.

The researchers report their findings in Biomedical Materials.

The post Gallium-doped bioactive glass kills 99% of bone cancer cells appeared first on Physics World.

]]>
Research update New therapy kills cancerous cells while stimulating growth of new healthy bone https://physicsworld.com/wp-content/uploads/2024/09/18-09-24-gallium-Richard-Martin.jpg newsletter1
Adaptive deep brain stimulation reduces Parkinson’s disease symptoms https://physicsworld.com/a/adaptive-deep-brain-stimulation-reduces-parkinsons-disease-symptoms/ Wed, 18 Sep 2024 09:10:46 +0000 https://physicsworld.com/?p=116800 An intelligent self-adjusting brain pacemaker could improve the quality-of-life for those living with Parkinson’s disease

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

]]>
Deep brain stimulation (DBS) is an established treatment for patients with Parkinson’s disease who experience disabling tremors and slowness of movements. But because the therapy is delivered with constant stimulation parameters – which are unresponsive to a patient’s activities or variations in symptom severity throughout the day – it can cause breakthrough symptoms and unwanted side effects.

In their latest Parkinson’s disease initiative, researchers led by Philip Starr from the UCSF Weill Institute for Neurosciences have developed an adaptive DBS (aDBS) technique that may offer a radical improvement. In a feasibility study with four patients, they demonstrated that this intelligent “brain pacemaker” can reduce bothersome side effects by 50%.

The self-adjusting aDBS, described in Nature Medicine, monitors a patient’s brain activity in real time and adjusts the level of stimulation to curtail symptoms as they arise. Generating calibrated pulses of electricity, the intelligent aDBS pacemaker provides less stimulation when Parkinson’s medication is active, to ward off excessive movements, and increases stimulation to prevent slowness and stiffness as the drugs wear off.

Starr and colleagues conducted a blinded, randomized feasibility trial to identify neural biomarkers of motor signs during active stimulation, and to compare the effects of aDBS with optimized constant DBS (cDBS) during normal, unrestricted daily life.

The team recruited four male patients with Parkinson’s disease, ranging in age from 47 to 68 years, for the study. Although all participants had implanted DBS devices, they were still experiencing symptom fluctuations that were not resolved by either medication or cDBS therapy. They were asked to identify the most bothersome residual symptom that they experienced.

To perform aDBS, the researchers developed an individualized data-driven pipeline for each participant, which turns the recorded subthalamic or cortical field potentials into personalized algorithms that auto-adjust the stimulation amplitudes to alleviate residual motor fluctuations. They used both in-clinic and at-home neural recordings to provide the data.

“The at-home data streaming step was important to ensure that biomarkers identified in idealized, investigator-controlled conditions in the clinic could function in naturalistic settings,” the researchers write.
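The patient-specific algorithms themselves are detailed in the paper rather than in this article, but a common adaptive-DBS pattern is a feedback loop that nudges the stimulation amplitude up or down as a neural biomarker (such as beta-band power) crosses personalized thresholds. The sketch below follows that generic pattern only; the thresholds, step size and amplitude limits are invented, not those of the UCSF system.

```python
# Schematic adaptive-DBS control loop: adjust stimulation amplitude from a
# neural biomarker (e.g. beta-band power). Thresholds, limits and step sizes
# are invented; the UCSF pipeline learns patient-specific settings from data.
def adbs_step(amplitude_ma, biomarker, low=0.3, high=0.7,
              step=0.1, min_ma=0.5, max_ma=3.5):
    """Return the next stimulation amplitude (mA) given one biomarker sample."""
    if biomarker > high:            # symptoms (e.g. slowness) re-emerging
        amplitude_ma += step        # -> stimulate more
    elif biomarker < low:           # medication active, risk of dyskinesia
        amplitude_ma -= step        # -> stimulate less
    return max(min_ma, min(max_ma, amplitude_ma))

amp = 2.0
for b in [0.8, 0.9, 0.5, 0.2, 0.1, 0.6]:   # simulated biomarker readings
    amp = adbs_step(amp, b)
    print(f"biomarker={b:.1f} -> amplitude={amp:.1f} mA")
```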

The four participants received aDBS alongside their existing DBS therapy. The team compared the treatments by alternating between cDBS and aDBS every two to seven days, with a cumulative period of one month per condition.

The researchers monitored motor symptoms using wearable devices plus symptom diaries completed daily by the participants. They evaluated the most bothersome symptoms, in most cases bradykinesia (slowness of movements), as well as stimulation-associated side effects such as dyskinesia (involuntary movements). To control for other unwanted side effects, participants also rated other common motor symptoms, their quality of sleep, and non-motor symptoms such as depression, anxiety, apathy and impulsivity.

The study revealed that aDBS improved each participant’s most bothersome symptom by roughly 50%. Three patients also reported improved quality-of-life using aDBS. This change was so obvious to these three participants that, even though they did not know which treatment was being delivered at any time, they could often correctly guess when they were receiving aDBS.

The researchers note that the study establishes the methodology for performing future trials in larger groups of males and females with Parkinson’s disease.

“There are three key pathways for future research,” lead author Carina Oehrn tells Physics World. “First, simplifying and automating the setup of these systems is essential for broader clinical implementation. Future work by Starr and Simon Little at UCSF, and Lauren Hammer (now at the Hospital of the University of Pennsylvania) will focus on automating this process to increase access to the technology. From a practicality standpoint, we think it necessary to develop an AI-driven smart device that can identify and auto-set treatment settings with a clinician-activated button.”

“Second, long-term monitoring for safety and sustained effectiveness is crucial,” Oehrn added. “Third, we need to expand these approaches to address non-motor symptoms in Parkinson’s disease, where treatment options are limited. I am studying aDBS for memory and mood in Parkinson’s at the University of California-Davis. Little is investigating aDBS for sleep disturbances and motivation.”

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

]]>
Research update An intelligent self-adjusting brain pacemaker could improve the quality-of-life for those living with Parkinson’s disease https://physicsworld.com/wp-content/uploads/2024/09/18-09-24-UCSF-DBS-Parkinsons-06.jpg
Dark-matter decay could have given ancient supermassive black holes a boost https://physicsworld.com/a/dark-matter-decay-could-have-given-ancient-supermassive-black-holes-a-boost/ Tue, 17 Sep 2024 15:19:39 +0000 https://physicsworld.com/?p=116799 Calculations suggest photons may have warmed gas clouds

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.

]]>
The decay of dark matter could have played a crucial role in triggering the formation of supermassive black holes (SMBHs) in the early universe, according to a trio of astronomers in the US. Using a combination of gas-cloud simulations and theoretical dark matter calculations, Yifan Lu and colleagues at the University of California, Los Angeles, uncovered promising evidence that the decay of dark matter may have provided the radiation necessary to prevent primordial gas clouds from fragmenting as they collapsed.

SMBHs are thought to reside at the centres of most large galaxies, and can be hundreds of thousands to billions of times more massive than the Sun. For decades, astronomers puzzled over how such immense objects could have formed, and the mystery has deepened with recent observations by the James Webb Space Telescope (JWST).

Since 2023, JWST has detected SMBHs that existed less than one billion years after the birth of the universe. This is far too early for them to be the result of conventional stellar evolution, whereby smaller black holes coalesce to create an SMBH.

Fragmentation problem

An alternative explanation is that vast primordial gas clouds in the early universe collapsed directly into SMBHs. However, as Lu explains, this theory challenges our understanding of how matter behaves. “Detailed calculations show that, in the absence of any unusual radiation, the largest gas clouds tend to fragment and form a myriad of small halos, not a single supermassive black hole,” he says. “This is due to the formation of molecular hydrogen, which cools the rest of the gas by radiating away thermal energy.”

For SMBHs to form under these conditions, molecular hydrogen would have needed to be somehow suppressed, which would require an additional source of radiation from within these ancient clouds. Recent studies have proposed that this extra energy could have come from hypothetical dark-matter particles decaying into photons.

“This additional radiation could cause the dissociation of molecular hydrogen, preventing fragmentation of large gas clouds into smaller pieces,” Lu explains. “In this case, gravity forces the entire large cloud to collapse as a whole into a [SMBH].”

In several recent studies, researchers have used simulations and theoretical estimates to investigate this possibility. So far, however, most studies have either focused on the mechanics of collapsing gas clouds or on the emissions produced by decaying dark matter, with little overlap between the two.

Extra ingredient needed

“Computer simulations of clouds of gas that could directly collapse to black holes have been studied extensively by groups farther on the astrophysics side of things, and they had examined how additional sources of radiation are a necessary ingredient,” explains Lu’s colleague Zachary Picker.

“Simultaneously, people from the dark matter side had performed some theoretical estimations and found that it seemed unlikely that dark matter could be the source of this additional radiation,” adds Picker.

In their study, Lu, Picker, and Alexander Kusenko sought to bridge this gap by combining both approaches: simulating the collapse of a gas cloud when subjected to radiation produced by the decay of several different candidate dark-matter particles. As they predicted, some of these particles could indeed provide the missing radiation needed to dissociate molecular hydrogen, allowing the entire cloud to collapse into a single SMBH.

However, dark-matter particles are hypothetical and have never been detected directly. As a result, the trio acknowledges that there is currently no reliable way to verify their findings experimentally. For now, this means that their model will simply join a growing list of theories that aim to explain the formation of SMBHs. But if the situation changes in the future, the researchers hope their model could represent a significant step forward in understanding the early universe’s evolution.

“One day, hopefully in my lifetime, we’ll find out what the dark matter is, and then suddenly all of the papers written about that particular type will magically become ‘correct’,” Picker says. “All we can do until then is to keep trying new ideas and hope they uncover something interesting.”

The research is described in Physical Review Letters.

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.

]]>
Research update Calculations suggest photons may have warmed gas clouds https://physicsworld.com/wp-content/uploads/2024/09/17-9-24-ancient-SMBH.jpg newsletter1
New superconductor has record breaking current density https://physicsworld.com/a/new-superconductor-has-record-breaking-current-density/ Tue, 17 Sep 2024 10:28:06 +0000 https://physicsworld.com/?p=116754 Rare-earth barium copper oxide structure also has the highest pinning force ever reported

The post New superconductor has record breaking current density appeared first on Physics World.

]]>
A superconducting wire segment based on rare-earth barium copper oxide (REBCO) is the highest performing yet in terms of current density, carrying 190 MA/cm2 in the absence of any external magnetic field at a temperature of 4.2 K. At warmer temperatures of 20 K (which is the proposed application temperature for magnets used in commercial nuclear fusion reactors), the wires can still carry over 150 MA/cm2. These figures mean that the wire, despite being only 0.2 micron thick, can carry a current comparable to that of commercial superconducting wires that are almost 10 times thicker, according to its developers at the University at Buffalo in the US.
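A quick back-of-the-envelope check of that thickness comparison, using only the figures quoted above (the “commercial” entry simply applies the article’s almost-10-times-thicker comparison, not a datasheet value):

```python
# Back-of-envelope check of the thickness comparison: current per unit tape
# width is I/w = Jc * t. The values follow the article's own numbers; the
# "commercial" case just applies its ~10x-thicker comparison, not a datasheet.
Jc = 190e6            # A/cm^2 at 4.2 K, self-field (reported value)
t_film = 0.2e-4       # 0.2 micron film thickness, converted to cm

i_per_cm_width = Jc * t_film           # A per cm of tape width
print(f"new film:   {i_per_cm_width:.0f} A per cm width")      # -> 3800 A

t_commercial = 10 * t_film             # "almost 10 times thicker"
Jc_needed = i_per_cm_width / t_commercial
print(f"a 10x-thicker layer matches it at Jc = {Jc_needed/1e6:.0f} MA/cm^2")
```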

High-temperature superconducting (HTS) wires could be employed in a host of applications, including energy generation, storage and transmission, transportation, and in the defence and medical sectors. They might also be used in commercial nuclear fusion, offering the possibility of limitless clean energy. Indeed, if successful, this niche application could help address the world’s energy supply issues, says Amit Goyal of the University at Buffalo’s School of Engineering and Applied Science, who co-led this new study.

Record-breaking critical current density and pinning force

Before such large-scale applications see the light of day, however, the performance of HTS wires must be improved – and their cost reduced. Goyal and colleagues’ new HTS wire has the highest values of critical current density reported to date. This is particularly true at lower operating temperatures ranging from 4.2–30 K, which is of interest for the fusion application. While still extremely cold, these are much higher than the absolute zero temperatures that traditional superconductors function at, says Goyal.

And that is not all: the wires also have the highest pinning force (that is, the ability to hold magnetic vortices) ever reported for such wires – around 6.4 TN/m3 at 4.2 K and about 4.2 TN/m3 at 20 K, both under a 7 T applied magnetic field.

“Prior to this work, we did not know if such levels of critical current density and pinning were possible to achieve,” says Goyal.

The researchers made their wire using a technique called pulsed laser deposition. Here, a laser beam impinges on a target material and ablates material that is deposited as a film on the substrate, explains Goyal. “This technique is employed by a majority of HTS wire manufacturers. In our experiment, the high critical current density was made possible thanks to a combination of pinning effects from rare-earth doping, oxygen-point defects and insulating barium zirconate nanocolumns as well as optimization of deposition conditions.”

This is a very exciting time for the HTS field, he tells Physics World. “We have a very important niche large-scale application – commercial nuclear fusion. Indeed, one company, Commonwealth Fusion, has raised $1.8bn in series B funding. And within the last 5 years, almost 20 new companies have been founded around the world to commercialize this fusion technology.”

Goyal adds that his group’s work is just the beginning and that “significant performance enhancements are still possible”. “If HTS wire manufacturers work on optimizing the conditions under which the wires are deposited, they should be able to achieve a much higher critical current density, which will result in a much better price/performance metric for the wires and enable applications – not just in fusion, but all other large-scale applications as well.”

The researchers say they now want to further enhance the critical current density and pinning force of their 0.2 micron-thick wires. “We also want to demonstrate thicker films that can carry much higher current,” says Goyal.

They describe their HTS wires in Nature Communications.

The post New superconductor has record breaking current density appeared first on Physics World.

]]>
Research update Rare-earth barium copper oxide structure also has the highest pinning force ever reported https://physicsworld.com/wp-content/uploads/2024/09/HTS-wire.jpg
Magnetically controlled prosthetic hand restores fine motion control https://physicsworld.com/a/magnetically-controlled-prosthetic-hand-restores-fine-motion-control/ Mon, 16 Sep 2024 15:30:27 +0000 https://physicsworld.com/?p=116791 The first user of a myokinetic prosthesis was able to perform everyday actions such as pouring water into a glass, opening a jar, tying shoelaces and grasping fragile objects

The post Magnetically controlled prosthetic hand restores fine motion control appeared first on Physics World.

]]>
A magnetically controlled prosthetic hand, tested for the first time in a participant with an amputated lower arm, provided fine control of hand motion and enabled the user to perform everyday actions and grasp fragile objects. The robotic prosthetic, developed by a team at Scuola Superiore Sant’Anna in Pisa, uses tiny implanted magnets to predict and carry out intended movements.

Losing a hand can severely affect a person’s ability to perform everyday work and social activities, and many researchers are investigating ways to restore lost motor function via prosthetics. Most available or proposed strategies rely on deciphering electrical signals from residual nerves and muscles to control bionic limbs. But this myoelectric approach cannot reproduce the dexterous movements of a human hand.

Instead, Christian Cipriani and colleagues developed an alternative technique that exploits the physical displacement of skeletal muscles to decode the user’s motor intentions. The new myokinetic interface uses permanent magnets implanted into the residual muscles of the user’s amputated arm to accurately control finger movements of a robotic hand.

“Standard myoelectric prostheses collect non-selective signals from the muscle surface and, due to that low selectivity, typically support only two movements,” explains first author Marta Gherardini. “In contrast, myokinetic control enables simultaneous and selective targeting of multiple muscles, significantly increasing the number of control sources and, consequently, the number of recognizable movements.”

First-in-human test

The first patient to test the new prosthesis was a 34-year-old named Daniel, who had recently lost his left hand and had started to use a myoelectric prosthesis. The team selected him as a suitable candidate because his amputation was recent and blunt, he could still feel the lost hand, and the residual muscles in his arm moved in response to his intentions.

For the study, the team implanted six cylindrical (2 mm radius and height) neodymium magnets coated with a biocompatible shell into three muscles in Daniel’s residual forearm. In a minimally invasive procedure, the surgeon used plastic instruments to manipulate the magnets into the tip of the target muscles and align their magnetic fields, verifying their placement using ultrasound.

Daniel also wore a customized carbon fibre prosthetic arm containing all of the electronics needed to track the magnets’ locations in space. When he activates the residual muscles in his arm, the implanted magnets move in response to the muscle contractions. A grid of 140 magnetic field sensors in the prosthesis detects the position and orientation of these magnets and transmits the data to an embedded computing unit. Finally, a pattern recognition algorithm translates the movements into control signals for a Mia-Hand robotic hand.
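
To give a flavour of how the final decoding step can work, here is a deliberately minimal sketch in Python. It is purely illustrative – the gesture set, the fake calibration data and the nearest-centroid rule are our assumptions, not the team’s published algorithm:

    # Minimal sketch of a myokinetic-style decoder (illustrative only,
    # not the Sant'Anna team's algorithm).
    import numpy as np

    rng = np.random.default_rng(0)
    gestures = ["rest", "power grasp", "pinch", "point"]  # hypothetical set

    # Fake calibration data: 20 repetitions per gesture of an 18-dimensional
    # magnet-displacement vector (6 magnets x three coordinates), in mm.
    calibration = {g: rng.normal(loc=i, scale=0.3, size=(20, 18))
                   for i, g in enumerate(gestures)}
    centroids = {g: reps.mean(axis=0) for g, reps in calibration.items()}

    def classify(reading):
        """Return the gesture whose calibration centroid is closest."""
        return min(centroids, key=lambda g: np.linalg.norm(reading - centroids[g]))

    # A noisy reading near the "pinch" template is recognized as a pinch
    reading = centroids["pinch"] + rng.normal(scale=0.2, size=18)
    print(classify(reading))  # -> pinch

A real decoder must run in milliseconds on embedded hardware and cope with drift and crosstalk, but the basic idea – matching the measured displacement pattern against calibrated templates – carries over.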

Gherardini notes that the pattern recognition algorithm rapidly learnt to control the hand based on Daniel’s intended movements. “Training the algorithm took a few minutes, and it was immediately able to correctly recognize the movements,” she says.

In addition to the controlled hand motion arising from intended grasping, the team found that elbow movement activated other forearm muscles. Tissue near the elbow was also compressed by the prosthetic socket during elbow flexion, which caused unintended movement of nearby magnets. “We addressed this issue by estimating the elbow movement through the displacement of these magnets, and adjusting the position of the other magnets accordingly,” says Gherardini.

Robotic prosthesis user grasps a fragile plastic cup

During the six-week study, the team performed a series of functional tests commonly used to assess the dexterity of upper limb prostheses. Daniel successfully completed these tests, with comparable performance to that achieved using a traditional myoelectric prosthetic (in tests performed before the implantation surgery).

Importantly, he was able to control finger movements well enough to perform a wide range of everyday activities – such as unscrewing a water bottle cap, cutting with a knife, closing a zip, tying shoelaces and removing pills from a blister pack. He could also control the grasp force to manipulate fragile objects such as an egg and a plastic cup.

The researchers report that the myokinetic interface worked even better than they expected, with the results highlighting its potential to restore natural motor control in people who have lost limbs. “This system allowed me to recover lost sensations and emotions: it feels like I’m moving my own hand,” says Daniel in a press statement.

At the end of the six weeks, the team removed the magnets. Aside from low-grade inflammation around one magnet that had lost its protective shell, all of the surrounding tissue was healthy. “We are currently working towards a long-term solution by developing a magnet coating that ensures long-term biocompatibility, allowing users to eventually use this system at home,” Gherardini tells Physics World.

She adds that the team is planning to perform another test of the myokinetic prosthesis within the next two years.

The myokinetic prosthesis is described in Science Robotics.

NASA suffering from ageing infrastructure and inefficient management practices, finds report https://physicsworld.com/a/nasa-suffering-from-ageing-infrastructure-and-inefficient-management-practices-finds-report/ Mon, 16 Sep 2024 13:34:15 +0000 https://physicsworld.com/?p=116785 NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives

NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives. That is the conclusion of a new report – NASA at a Crossroads: Maintaining Workforce, Infrastructure, and Technology Preeminence in the Coming Decades – that finds a space agency battling on many fronts including ageing infrastructure, China’s growing presence in space, and issues recruiting staff.

The report was requested by Congress and published by the National Academies of Sciences, Engineering, and Medicine. It was written by a 13-member committee, which included representatives from industry, academia and government, and was chaired by Norman Augustine, former chief executive of Lockheed Martin. Members visited all nine NASA centres and talked to about 400 employees to compile the report.

While the panel say that NASA has “motivat[ed] many of the nation’s youth to pursue careers in science and technology” and “been a source of inspiration and pride to all Americans”, they highlight a variety of problems at the agency. These include out-of-date infrastructure, pressure to prioritize short-term objectives, budget mismatches, inefficient management practices and an unbalanced reliance on commercial partners. Yet according to Augustine, the agency’s main problem is “the more mundane tendency to focus on near-term accomplishments at the expense of long-term viability”.

As well as external challenges such as China’s growing role in space, the committee discovered that many of the agency’s problems were homegrown. They found that 83% of NASA’s facilities are past their design lifetimes. For example, the capacity of the Deep Space Network, which provides critical communications support for uncrewed missions, “is inadequate” to support future craft and even current missions such as the Artemis Moon programme “without disrupting other projects”.

There is also competition from private space firms in both technology development and recruitment. According to the report, NASA is constrained by strict hiring rules and by the salaries it can offer. It takes 81 days, on average, to go from initial interview to an offer of employment. During that period, a candidate will probably receive offers from private firms – not only in the space industry but also in the “digital world” – which offer higher salaries.

In addition, Augustine notes, the agency is giving its engineers less opportunity “to get their hands dirty” by carrying out their own research. Instead, they are increasingly managing outside contractors who are doing the development work. At the same time, the report identifies a “major reduction” over the past few decades in basic research that is financed by industry – a trend that the report says is “largely attributable to shareholders seeking near-term returns as opposed to laying groundwork for the future”.

Yet the committee also finds that NASA faces “internal and external pressure to prioritize short-term measures” without considering longer-term needs and implications. “If left unchecked these pressures are likely to result in a NASA that is incapable of satisfying national objectives in the longer term,” the report states. “The inevitable consequence of such a strategy is to erode those essential capabilities that led to the organization’s greatness in the first place and that underpin its future potential.”

Cash woes

Another concern is the US government budget process, which operates year by year and is slowly reducing NASA’s proportional share of funding. The report finds that the budget is “often incompatible with the scope, complexity, and difficulty of [NASA’s] work” and that the funding allocation “has degraded NASA’s capabilities to the point where agency sustainability is in question”. Indeed, during the agency’s lifetime, US government spending on R&D has declined from 1.9% of gross domestic product to 0.7%. The panel also notes a trend of reducing investment in research and technology as a fraction of funds devoted to missions. “NASA is likely to face budgetary problems in the future that greatly exceed those we’ve seen in recent years,” Augustine told a briefing.

The panel now calls on NASA to work with Congress to establish “an annually replenished revolving fund – such as a working capital fund” to maintain and improve the agency’s infrastructure. It would be financed by the US government as well as users of NASA’s facilities and be “sufficiently capitalized to eliminate NASA’s current maintenance backlog over the next decade”. While it is unclear how the government and the agency will react to that proposal, as Augustine warned, for NASA, “this is not business as usual”.

Stop this historic science site in St Petersburg from being sold https://physicsworld.com/a/stop-this-historic-science-site-in-st-petersburg-from-being-sold/ Mon, 16 Sep 2024 07:00:48 +0000 https://physicsworld.com/?p=116484 A historic scientific landmark may soon disappear, says Robert P Crease

In the middle of one of the most expensive neighbourhoods in St Petersburg, Russia, is a vacant and poorly kept lot about half an acre in size. It’s been empty for years for a reason: on it stood the first scientific research laboratory in Russia – maybe even in the world – and for over two and a half centuries generations of Russian scientists have hoped to restore it. But its days as an empty lot may be over, for the land could soon be sold to the highest bidder.

The laboratory was the idea of Mikhail Lomonosov (1711–1765), Russia’s first scientist in the modern sense. Born into a shipping family on an island in the far north of Russia, Lomonosov developed a passion for science that saw him study in Moscow, Kyiv and St Petersburg. He then moved to Germany, where he got involved in the then revolutionary, mathematically informed notion that matter is made up of smaller elements called “corpuscles”.

In 1741, at the age of 30, Lomonosov returned to Russia, where he joined the St Petersburg Academy of Science. There he began agitating for the academy to set up a physico-chemistry laboratory of its own. Until then, experimental labs in Russia and elsewhere had been mainly applied institutions for testing and developing paints, dyes and glasses, and for producing medicines and chemicals for gunpowder. But Lomonosov wanted something very different.

His idea was for a lab devoted entirely to basic research and development that could engage and train students to do empirical research on materials. Most importantly, he wanted the academy to run the lab, but the state to pay for it. After years of agitating, Lomonosov’s plan was approved, and the St Petersburg laboratory opened in 1748 on a handy site in the centre of St Petersburg, just a 20-minute walk from the academy, near the university, museums and the city’s famous bridges.

The laboratory was a remarkable place, equipped with furnaces, ovens, scales, thermometers, microscopes, grindstones and various other instruments for studying materials and their properties. Lomonosov and his students used these to analyse ores, minerals, silicates, porcelain, glasses and mosaics. He also carried out experiments with implications for fundamental theory.

In 1756, for instance, Lomonosov found that certain experiments involving the oxidation of lead carried out by the British chemist Robert Boyle were in error. Indirectly, Lomonosov’s work also suggested a general law of conservation covering the total weight of chemically reacting substances – a law that is these days usually attributed to the French chemist Antoine Lavoisier, who arrived at the notion independently three decades later.

A symbol for science

Lomonosov left the formal leadership of the laboratory in 1757, after which it was headed by several other academy professors. The lab continued to serve the academy’s research until 1793 when several misfortunes, including a flood and a robbery, led to it running down. Still, the lab has had huge significance as a symbol that Russian scientists have appealed to ever since as a model for more state support. It also inspired the setting-up of other chemical laboratories, including a similar facility built at Moscow University in 1755.

For the last two and a half centuries, however, the laboratory’s allies have struggled to keep the site from becoming just real estate in a pricey St Petersburg neighbourhood. In 1793 an academician bought the land from the Academy of Sciences and rebuilt the lab as housing, although preserving its foundations and old walls. Over the next century the plot passed through a series of private owners, who again rebuilt the laboratory and its associated house.

The area was levelled again during the Siege of Leningrad in the Second World War, though the lab’s foundations remained intact. After the war, the Soviet Union tried to reconstruct the lab, as did the Russian Academy of Sciences. More recently, advocates have tried to rebuild the lab in time for the 300th anniversary of the Russian Academy of Science, which takes place in 2024–2025.

Three photos of a disused plot of land in a city

All these attempts have failed. Meanwhile, ownership of the site was passed around several Russian administrative agencies, most recently to the Russian State Pedagogical University. Last March, the university put the land in the hands of a private real estate agent who advertised the site in a public notice with the statement that the land was “intended for scientific facilities”, without reference to the lab. The plot is supposed to open for bids this fall.

But scientists and historians worry about the vagueness of that phrase and are distrustful of its source. There is nothing to stop the university from succumbing to the extremely high market prices that developers would pay for its enticing location in the centre of St Petersburg.

The critical point

Money, wrote Karl Marx in his famous article on the subject, is “the confounding and confusing of all natural and human qualities”. As he saw it, money strips what it is used for of ties to human life and meaning. Monetizing Lomonosov’s lab makes us speak of it quantitatively in real-estate terms. In such language, the site is simply a flat, featureless half-acre plot of land that, one metre down, has pieces of stone that were once part of an earlier building.

It also encourages us to speak of the history of this plot as just a series of owners, buildings and events. Some might even say that we have already preserved the history of Lomonosov’s lab because much of its surviving contents are on display in a nearby museum called the Kunstkamera (or art chamber). What, therefore, could be the harm of selling the land?

Turning the history of science into nothing more than a tale of instruments promotes the view that science is all about clever individuals who use tools to probe the world for knowledge. But the places where scientists work are integral to science too. The plot of land on the 2nd avenue of Vasilevsky Island is where Lomonosov and his spirited colleagues and students shared experiences and techniques, made friendships and established networks.

It’s where humans, instruments, materials and funding came together in dynamic events that revealed new knowledge of how materials behave in different conditions. The lab is also historically important because it impressed academy and state authorities enough that they continued to support scientific research as essential to Russia’s future.

Sure, appreciating this dimension of science history requires more than restoring buildings. But preserving the places where science happens keeps alive important symbols of what makes science possible, then and now, in a world that needs more of it. Selling the site of Lomonosov’s lab for money amounts to repudiating the cultural value of science.

What happens when a warp drive collapses? https://physicsworld.com/a/what-happens-when-a-warp-drive-collapses/ Sat, 14 Sep 2024 13:02:20 +0000 https://physicsworld.com/?p=116750 It emits gravitational waves, say physicists

Simulations of space–times that contain negative energies can help us to better understand wormholes or the interior of black holes. For now, however, the physicists behind the new study – who admit to being big fans of Star Trek – have used their results to model the gravitational waves that would be emitted by a hypothetical failing warp drive.

Gravitational waves, which are ripples in the fabric of space–time, are emitted by cataclysmic events in the universe, like binary black hole and neutron star mergers. They might also be emitted by more exotic space–times such as wormholes or warp drives, which, unlike black hole and neutron star mergers, are still the stuff of science fiction.

First predicted by Albert Einstein in his general theory of relativity, gravitational waves were observed directly in 2015 by the Advanced LIGO detectors, which are laser interferometers comprising pairs of several-kilometre-long arms positioned at right angles to each other. As a gravitational wave passes through the detector, it slightly expands one arm while contracting the other. This creates a series of oscillations in the lengths of the arms that can be recorded as interference pattern variations.

The first detection by LIGO arose from the collision and merging of two black holes. These observations heralded the start of the era of gravitational-wave astronomy and viewing extreme gravitational events across the entire visible universe. Since then, astrophysicists have been asking themselves if signals from other strongly distorted regions of space–time could be seen in the future, beyond the compact binary mergers already detected.

Warp drives or bubbles

A “warp drive” (or “warp bubble”) is a hypothetical device that could allow space travellers to traverse space at faster-than-light speeds – as measured by some distant observer. Such a bubble contracts spacetime in front of it and expands spacetime behind it. It can do this, in theory, because unlike objects within space–time, space–time itself can bend, expand or contract at any speed. A spacecraft contained in such a drive could therefore arrive at its destination faster than light would in normal space without breaking Einstein’s cosmic speed limit.

The idea of warp drives is not new. They were first proposed in 1994 by the Mexican physicist Miguel Alcubierre, who named them after the mode of travel used in the sci-fi series Star Trek. We are not likely to see such drives anytime soon, however, since the only way to produce them is by generating vast amounts of negative energy – perhaps by using some sort of undiscovered exotic matter.
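
For reference – the article does not reproduce it, but it is short – the line element Alcubierre wrote down in 1994 (in units where c = 1) is

    \[ \mathrm{d}s^2 = -\mathrm{d}t^2 + \left[\mathrm{d}x - v_s\,f(r_s)\,\mathrm{d}t\right]^2 + \mathrm{d}y^2 + \mathrm{d}z^2 \]

where x_s(t) is the trajectory of the bubble’s centre, v_s = dx_s/dt its speed, r_s the distance from that centre, and f a smooth shaping function equal to 1 inside the bubble and falling to 0 far outside it. All of the “warp” sits in the middle term, which drags space along the direction of travel.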

A warp drive that is functioning normally, and travelling at a constant velocity, does not emit any gravitational waves. When it collapses, accelerates or decelerates, however, this should generate gravitational waves.

A team of physicists from Queen Mary University of London (QMUL), the University of Potsdam, the Max Planck Institute (MPI) for Gravitational Physics in Potsdam and Cardiff University decided to study the case of a collapsing warp drive. The warp drive is interesting, say the researchers, since it uses gravitational distortion of spacetime to propel a spaceship forward, rather than the usual kind of fuel-and-reaction propulsion system.

Decomposing spacetime

The team, led by Katy Clough of QMUL, Tim Dietrich from Potsdam and Sebastian Khan at Cardiff, began by describing the initial bubble using the original Alcubierre definition, giving it a fixed wall thickness. They then developed a formalism to describe the warp fluid and how it evolved, varying its initial velocity at the point of collapse (which is related to the amplitude of the warp bubble). Finally, they analysed the resulting gravitational-wave signatures and quantified the radiation of energy from the space–time region.

While Einstein’s equations of general relativity treat space and time on an equal footing, we have to split the time and space dimensions to do a proper simulation of how the system evolves, explains Dietrich. This approach is normally referred to as the 3+1 decomposition of spacetime. “We followed this very common approach, which is routinely used to study binary black hole or binary neutron star mergers.”
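
In the 3+1 decomposition the metric is written in terms of a lapse function α, a shift vector β^i and a spatial metric γ_ij:

    \[ \mathrm{d}s^2 = -\alpha^2\,\mathrm{d}t^2 + \gamma_{ij}\,\bigl(\mathrm{d}x^i + \beta^i\,\mathrm{d}t\bigr)\bigl(\mathrm{d}x^j + \beta^j\,\mathrm{d}t\bigr) \]

so that the field equations become evolution equations for the spatial slices. Conveniently, the Alcubierre metric above is already of this form, with α = 1 and the warp bubble carried entirely by the shift, β^x = −v_s f(r_s).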

It was not that simple, however: “given the particular spacetime that we were investigating, we also had to determine additional equations for the simulation of the material that is sustaining the warp bubble from collapse,” says Dietrich. “We also had to find a way to introduce the collapse that then triggers the emission of gravitational waves.”

Since they were solving Einstein’s field equation directly, the researchers say they could read off how spacetime evolves and the gravitational waves emitted from their simulation.

Very speculative work

Dietrich says that he and his colleagues are big Star Trek fans and that the idea for the project, which they detail in The Open Journal of Astrophysics, came to them a few years ago in Göttingen in Germany, where Clough was doing her postdoc. “Sebastian then had the idea of using the simulations that we normally use to help detect black holes to look for signatures of the Alcubierre warp drive metric,” recalls Dietrich. “We thought it would be a quick project, but it turned out to be much harder than we expected.”

The researchers found that, for warp ships around a kilometre in size, the gravitational waves emitted are of a high frequency and, therefore, not detectable with current gravitational-wave detectors. “While there are proposals for new gravitational-wave detectors at higher frequencies, our work is very speculative, and so it probably wouldn’t be sufficient to motivate anyone to build anything,” says Dietrich. “It does have a number of theoretical implications for our understanding of exotic spacetimes though,” he adds. “Since this is one of the few cases in which consistent simulations have been performed for spacetimes containing exotic forms of matter, namely negative energy, our work could be extended to also study wormholes, the inside of black holes, or the very early stages of the universe, where negative energy might prevent the formation of singularities.”
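
A rough light-crossing-time estimate (ours, not a number quoted by the team) shows why the frequencies come out so high: for a collapsing structure of size L ~ 1 km,

    \[ f \sim \frac{c}{L} \approx \frac{3\times10^{8}\ \mathrm{m\,s^{-1}}}{10^{3}\ \mathrm{m}} \approx 300\ \mathrm{kHz} \]

which lies far above the band, from roughly 10 Hz to a few kilohertz, in which LIGO-like interferometers are sensitive.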

Even though they “had a lot of fun” during this proof-of-principle project, the researchers say that they will now probably go back to their “normal” work, namely the study of compact binary systems.

UK reveals next STEPs toward prototype fusion power plant https://physicsworld.com/a/uk-reveals-next-steps-toward-prototype-fusion-power-plant/ Fri, 13 Sep 2024 08:30:01 +0000 https://physicsworld.com/?p=116737 Engineers and physicists have met to discuss the challenges and opportunities of building a practical fusion power plant in the UK

“Fiendish”, “technically tough”, “difficult”, “complicated”. Those were just a few of the choice words used at an event last week in Oxfordshire, UK, to describe ambitious plans to build a prototype fusion power plant. Held at the UK Atomic Energy Authority (UKAEA) Culham campus, the half-day meeting on 5 September saw engineers and physicists discuss the challenges that lie ahead as well the opportunities that this fusion “moonshot” represents.

The prototype fusion plant in question is known as the Spherical Tokamak for Energy Production (STEP), which was first announced by the UK government in 2019 when it unveiled a £220m package of funding for the project. STEP will be based on “spherical” tokamak technology currently being pioneered at the UK’s Culham Centre for Fusion Energy (CCFE). In 2022 a site for STEP was chosen at the former coal-fired power station at West Burton in Nottinghamshire. Operations are expected to begin in the 2040s with STEP aiming to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.

A spherical tokamak is more compact than a traditional tokamak, such as the ITER experimental fusion reactor currently being built in Cadarache, France, which has been hit with cost hikes and delays in recent years. The compact nature of the spherical tokamak, which was first pioneered in the UK in the 1980s, is expected to minimize costs, maximize energy output and possibly make the machine easier to maintain when scaled up to a fully fledged fusion power plant.

The current leading spherical tokamaks worldwide are the Mega Amp Spherical Tokamak (MAST-U) at the CCFE and the National Spherical Torus Experiment at the Princeton Plasma Physics Laboratory (PPPL) in the US, which is nearing the completion of an upgrade. Despite much progress, however, those tokamaks are yet to demonstrate fusion conditions through the use of the hydrogen isotope tritium in the fuel, which is necessary to achieve a “burning” plasma. This goal has, though, already been achieved in traditional tokamaks such as the Joint European Torus, which turned off in 2023.

“STEP is a big extrapolation from today’s machines,” admitted STEP chief engineer Chris Waldon at the event. “It is complex and complicated, but we are now beginning to converge on a single design [for STEP].”

A fusion ‘moonshot’

The meeting at Culham was held to mark the publication of 15 papers on the technical progress made on STEP over the past four years. They cover STEP’s plasma, maintenance, magnets and tritium-breeding programme, as well as pathways to fuel self-sufficiency (Philosophical Transactions A 382 20230416). Officials were keen to stress, however, that the papers are a snapshot of progress to date and that some aspects of the design have since progressed.

One issue that cropped up during the talks was the challenge of extrapolating every element of tokamak technology to STEP – a feat described by one panellist as being “so far off our graphs”. While theory and modelling have come a long way in the last decade, even the best models will not be a substitute for the real thing. “Until we do STEP we won’t know everything,” says physicist Steve Cowley, director of the PPPL. The challenges include managing potential instabilities and disruptions in the plasma – which at worst could obliterate the wall of a reactor – as well as operating high-temperature superconducting magnets, which have yet to be tested under the intensity of fusion conditions, to confine the plasma.

Another significant challenge is self-breeding tritium via neutron capture in lithium, which would be done in a roughly one-metre thick “blanket” surrounding the reactor. This is far from straightforward and the STEP team are still researching what technology might prevail – whether to use a solid pebble-bed or liquid lithium. While liquid lithium is good at producing tritium, for example, extracting the isotope to put back into the reactor is complex.
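
The underlying nuclear reactions are at least well established: both natural lithium isotopes can convert a fusion neutron into tritium,

    \[ {}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV} \]
    \[ {}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n' - 2.5\ \mathrm{MeV} \]

the first being exothermic and dominant at low neutron energies, the second an endothermic threshold reaction that has the virtue of not consuming the neutron. The hard part is the engineering that surrounds these reactions – including, as noted above, getting the bred tritium back out of the blanket.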

Howard Wilson, fusion pilot plant R&D lead at the Oak Ridge National Laboratory in the US, was keen to stress that STEP will not be a commercial power plant. Instead, its job is to demonstrate “a pathway towards commercialisation”. That is likely to come in several stages, the first being to generate 1 GW of power, of which 100 MW would go to the “grid” (the other 900 MW being needed to run the plant’s own systems). The second stage will be to test whether that power production can be sustained via the self-breeding of tritium back into the reactor – what is known as a “closed fuel cycle”.

Ian Chapman, chief executive of the UKAEA, outlined what he called the “fiendish” challenges that lie ahead for fusion, even if STEP demonstrates that it is possible to deliver energy to the grid in a sustainable way. “We need to produce a project that will deliver energy someone will buy,” he said. That will be achieved in part via STEP’s third objective, which is to get a better understanding of the maintenance requirements of a fusion power plant and the impact they would have on reactor downtime. “We fail if there is no cost-effective solution,” added STEP engineering director Debbie Kempton.

STEP officials are now selecting industry partners – in engineering and construction – to work alongside the UKAEA on the design. Indeed, STEP is as much about creating a whole fusion industry as it is about physically building a plant. A breathless two-minute pre-event promotional film – which loftily compared the development of fusion to the advent of the steam train and vaccines – was certainly given a much-needed reality check.

Annular eclipse photograph bags Royal Observatory Greenwich prize https://physicsworld.com/a/annular-eclipse-photograph-bags-royal-observatory-greenwich-prize/ Thu, 12 Sep 2024 18:30:32 +0000 https://physicsworld.com/?p=116707 The image captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse

US photographer Ryan Imperio has beaten thousands of amateur and professional photographers from around the world to win the 2024 Astronomy Photographer of the Year.

The image – Distorted Shadows of the Moon’s Surface Created by an Annular Eclipse – was taken during the 2023 annular eclipse.

It captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse. They are formed when sunlight shines through the valleys and craters of the Moon’s surface, breaking the eclipse’s well-known ring pattern.

“This is an impressive dissection of the fleeting few seconds during the visibility of the Baily’s beads,” noted meteorologist and competition judge Kerry-Ann Lecky Hepburn. “This image left me captivated and amazed. It’s exceptional work deserving of high recognition.”

As well as winning the £10,000 top prize, the image will go on display along with other selected pictures from the competition at an exhibition at the National Maritime Museum observatory that opens on 13 September.

The award – now in its 16th year – is run by the Royal Observatory Greenwich in association with insurer Liberty Specialty Markets and BBC Sky at Night Magazine.

The competition received over 3500 entries from 58 countries.

Looking to the future of statistical physics, how intense storms can affect your cup of tea https://physicsworld.com/a/looking-to-the-future-of-statistical-physics-how-intense-storms-can-affect-your-cup-of-tea/ Thu, 12 Sep 2024 14:47:40 +0000 https://physicsworld.com/?p=116741 In this podcast we chat about active matter, artificial intelligence and storm Ciarán

In this episode of the Physics World Weekly podcast we explore two related areas of physics, statistical physics and thermodynamics.

First up we have two leading lights in statistical physics who explain how researchers in the field are studying phenomena as diverse as active matter and artificial intelligence.

They are Leticia Cugliandolo who is at Sorbonne University in Paris and Marc Mézard at Bocconi University in Italy.

Cugliandolo is also chief scientific director of the Journal of Statistical Mechanics: Theory and Experiment (JSTAT), a role from which Mézard has just stepped down. They both talk about how the journal and statistical physics have evolved over the past two decades and what the future could bring.

The second segment of this episode explores how intense storms can affect your cup of tea. Our guests are the meteorologists Caleb Miller and Giles Harrison, who measured the boiling point of water as storm Ciarán passed through the University of Reading in 2023. They explain the thermodynamics of what they found, and how the storm could have affected the quality of the millions of cups of tea brewed that day.

Carbon defect in boron nitride creates first omnidirectional magnetometer https://physicsworld.com/a/carbon-defect-in-boron-nitride-creates-first-omnidirectional-magnetometer/ Thu, 12 Sep 2024 09:43:45 +0000 https://physicsworld.com/?p=116696 Quantum sensor can detect magnetic fields in any direction and monitor temperature changes in a sample at the same time

A newly discovered carbon-based defect in the two-dimensional material hexagonal boron nitride (hBN) could be used as a quantum sensor to detect magnetic fields in any direction – a feat that is not possible with existing quantum sensing devices. Developed by a research team in Australia, the sensor can also detect temperature changes in a sample using the boron vacancy defect present in hBN. And thanks to its atomically thin structure, the sensor can conform to the shape of a sample, making it useful for probing structures that aren’t perfectly smooth.

The most sensitive magnetic field detectors available today exploit quantum effects to map the presence of extremely weak fields. To date, most of these have been made out of diamond and rely on the nitrogen vacancy (NV) centres contained within. NV centres are naturally occurring defects in the diamond lattice in which two carbon atoms are replaced with a single nitrogen atom, leaving one lattice site vacant. Together, the nitrogen atom and the vacancy can behave as a negatively charged entity with an intrinsic spin. NV centres are isolated from their surroundings, which means that their quantum behaviour is robust and stable.

When a photon hits an NV– centre, it can excite an electron to a higher-energy state. As it then decays back to the ground state, it may emit a photon of a different wavelength. The NV– centre has three spin sublevels, and the excited state of each sublevel has a different probability of emitting a photon when it decays.

By exciting an individual NV– centre repeatedly and collecting the emitted photons, researchers can detect its spin state. And since the spin state can be influenced by external variables such as magnetic field, electric field, temperature, force and pressure, NV– centres can therefore be used as atomic-scale sensors. Indeed, they are routinely employed today to study a wide variety of biological and physical systems.

There is a problem though – NV centres can only detect magnetic fields that are aligned with the sensor’s own axis. Devices must therefore contain many sensors placed at different alignment angles, which makes them difficult to use and limits them to specific applications. What’s more, the fact that they are rigid (diamond being the hardest material known) means they cannot conform to the sample being studied.
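
The blind spot is easy to see in numbers. The short Python sketch below is a first-order textbook illustration that we have added for clarity – it is not a calculation from the study – comparing how the resonance splitting of an NV centre and of a spin-half defect respond as a fixed-strength field is rotated away from the sensor axis:

    # First-order comparison of an NV centre (spin-1, with its zero-field
    # splitting near 2.87 GHz) and a spin-1/2 defect. Textbook values only.
    import numpy as np

    gamma = 28.0e9  # electron gyromagnetic ratio, Hz per tesla

    def nv_splitting(B, theta):
        """Separation of the NV ms = +/-1 lines: to first order only the
        axial field component B*cos(theta) contributes."""
        return 2 * gamma * B * np.cos(theta)

    def spin_half_splitting(B):
        """No zero-field splitting, so the Zeeman splitting tracks |B|
        whatever the field's direction."""
        return gamma * B

    B = 1e-3  # a 1 mT test field
    for theta in np.radians([0, 45, 90]):
        print(f"angle {np.degrees(theta):3.0f} deg: "
              f"NV splitting {nv_splitting(B, theta)/1e6:5.1f} MHz, "
              f"spin-1/2 splitting {spin_half_splitting(B)/1e6:5.1f} MHz")

At 90° the NV splitting vanishes to first order – the blind spot – while the spin-half splitting is unchanged at 28 MHz.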

A new carbon-based defect

Researchers recently discovered a new carbon-based defect in hBN, in addition to the boron vacancy it is already known to contain. In this latest work, and thanks to a carefully calibrated Rabi experiment (in which resonant driving reveals a defect’s spin transitions), a team led by Jean-Philippe Tetienne of RMIT University and Igor Aharonovich of the University of Technology Sydney found that the carbon-based defect behaves as a spin-half (S = 1/2) system; the boron-vacancy defect, by contrast, has spin one. It is this spin-half nature of the former that enables it to detect magnetic fields in any direction, say the researchers.

Team members Sam Scholten and Priya Singh

“Having two different independently addressable spin species within the same material at room temperature is unique, not even diamond has this capability,” explains Priya Singh from RMIT University, one of the lead authors of this study. “This is exciting because each spin species has its advantages and limitations, and so with hBN we can combine the best of both worlds. This is important especially for quantum sensing, where the spin half enables omnidirectional magnetometry, with no blind spot, while the spin one provides directional information when needed and is also a good temperature sensor.”

Until now, the spin multiplicity of the carbon defect was under debate in the hBN community, adds co-first author Sam Scholten from the University of Melbourne. “We have been able to unambiguously prove its spin-half nature, or more likely a pair of weakly coupled spin-half electrons.”
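
In spin-Hamiltonian shorthand (ours, for illustration – not notation from the paper), the distinction reads:

    \[ H_{S=1} = D S_z^2 + g\mu_B\,\mathbf{B}\cdot\mathbf{S}, \qquad H_{S=1/2} = g\mu_B\,\mathbf{B}\cdot\mathbf{S} \]

The zero-field splitting D in the spin-one case singles out a crystal axis – which is what provides directional information and, since D shifts with temperature, thermometry – whereas the spin-half defect’s splitting is simply g μ_B |B|, the same for every field orientation.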

The new S=1/2 sensor can be controlled using light in the same way as the boron vacancy-based sensor. What’s more, the two defects can be tuned to interact with each other and thus used together to detect both magnetic fields and temperature at the same time. Singh points out that the carbon-based defects were also naturally present in pretty much every hBN sample the team studied, from commercially sourced bulk crystals and powders to lab-made epitaxial films. “To create the boron vacancy defects in the same sample, we had to perform just one extra step, namely irradiating the samples with high-energy electrons, and that’s it,” she explains.

To create the hBN sensor, the researchers simply drop-cast an hBN powder suspension onto the target object, or transferred an epitaxial film or an exfoliated flake. “hBN is very versatile and easy to work with,” says Singh. “It is also low cost and easy to integrate with various other materials, so we expect lots of applications will emerge in nanoscale sensing – especially thanks to the omnidirectional magnetometry capability, unique for solid-state quantum sensors.”

The researchers are now trying to determine the exact crystallographic structure of the S=1/2 carbon defects and how they can engineer them on-demand in a few layers of hBN. “We are also planning sensing experiments that leverage the omnidirectional magnetometry capability,” says Scholten. “For instance, we can now image the stray field from a van der Waals ferromagnet as a function of the azimuthal angle of the applied field. In this way, we can precisely determine the magnetic anisotropy, something that has been a challenge with other methods in the case of ultrathin materials.”

The study is detailed in Nature Communications.

Dancing humans embody topological properties https://physicsworld.com/a/dancing-humans-embody-topological-properties/ Wed, 11 Sep 2024 16:08:29 +0000 https://physicsworld.com/?p=116665 Choreographed high school students have fun simulating curious phase of matter

High school students and scientists in the US have used dance to illustrate the physics of topological insulators. The students followed carefully choreographed instructions developed by scientists in what was a fun outreach activity that explained topological phenomena. The exercise demonstrates an alternative analogue for topologically nontrivial systems, which could be potentially useful for research.

“We thought that the way all of these phenomena are explained is rather contrived, and we wanted to, in some sense, democratize the notions of topological phases of matter to a broader audience,” says Joel Yuen-Zhou who is a theoretical chemist at the University of California, San Diego (UCSD). Yuen-Zhou led the research, which was done in collaboration with students and staff at Orange Glen High School near San Diego.

Topological insulators are a type of topological material where the bulk is an electrical insulator but the surface or edges (depending on whether the system is 3D or 2D) conduct electricity. The conducting states arise from a characteristic of the electronic band structure of the system as a whole, which means they persist despite defects or distortions so long as the fundamental topology of the system is undisturbed. Topology can be understood in terms of a coffee mug being topologically equivalent to a ring doughnut: both have a hole all the way through. This is unlike a jam doughnut, which has no hole and is therefore not topologically equivalent to a coffee mug.

Insulators without the conducting edge or surface states are “topologically trivial” and have insulating properties throughout. Yuen-Zhou explains that for topologically nontrivial properties to emerge, the system must be able to support wave phenomena and have something that fulfils the role of a magnetic field in condensed matter topological insulators. As such, analogues of topological insulators have been reported in systems ranging from oceanic and atmospheric fluids to enantiomeric molecules and active matter. Nonetheless, and despite the interest in topological properties for potential applications, they can still seem abstract and arcane.

Human analogue

Yuen-Zhou set about devising a human analogue of a topological insulator with then PhD student Matthew Du, who is now at the University of Chicago. The first step was to establish a Hamiltonian that defines how each site in a 2D lattice interacts with its neighbours and a magnetic field. They then formulated the Schrödinger equation of the system as an algorithm that updates after discrete steps in time and reproduces essential features of topological insulator behaviour. These are chiral propagation around the edges when initially excited at an edge; robustness to defects; propagation around the inside edge when the lattice has a hole in it; and an insulating bulk.
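
For readers who want to play with the idea, the snippet below captures the spirit of such a scheme. We stress that it uses a generic textbook lattice model (a Harper–Hofstadter Hamiltonian evolved in discrete time steps), not the specific Hamiltonian or update rule devised for the choreography:

    # Generic toy demo of chiral edge transport on a 2D lattice
    # (Harper-Hofstadter model; not the UCSD choreography's Hamiltonian).
    import numpy as np
    from scipy.linalg import expm

    L, alpha, J = 10, 0.25, 1.0   # lattice size, flux per plaquette, hopping
    idx = lambda x, y: x * L + y  # map site (x, y) to a matrix index

    H = np.zeros((L * L, L * L), dtype=complex)
    for x in range(L):
        for y in range(L):
            if x + 1 < L:  # hopping along x
                H[idx(x + 1, y), idx(x, y)] = -J
            if y + 1 < L:  # hopping along y carries a magnetic Peierls phase
                H[idx(x, y + 1), idx(x, y)] = -J * np.exp(2j * np.pi * alpha * x)
    H += H.conj().T  # make the Hamiltonian Hermitian

    U = expm(-1j * 0.5 * H)    # one discrete time step
    psi = np.zeros(L * L, dtype=complex)
    psi[idx(0, L // 2)] = 1.0  # excite a single site on the left edge

    boundary = [idx(x, y) for x in range(L) for y in range(L)
                if x in (0, L - 1) or y in (0, L - 1)]
    for step in range(1, 41):
        psi = U @ psi
        if step % 10 == 0:
            w = float((np.abs(psi[boundary]) ** 2).sum())
            print(f"step {step:2d}: probability on the boundary = {w:.2f}")

An excitation launched on the boundary keeps much of its weight there and circulates in one direction, while the same code with the excitation moved to a bulk site shows it spreading out and fizzling – the two behaviours the students enacted with their flags.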

The UCSD researchers then explored how this quantum behaviour could be translated into human behaviour. This was a challenge because quantum mechanics operates in the realm of complex numbers, which have real and imaginary components. Fortunately, they were able to identify initial conditions that lead to only real number values for the interactions at each time step of the algorithm. That way the humans, for whom imaginary interactions might be hard to simulate, could legitimately manifest only real numbers as they step through the algorithm. These real values were either one (choreographed as waving flags up), minus one (waving flags down) or zero (standing still).

“The structure isn’t actually specific just to the model that we focus on,” explains Du. “There’s actually a whole class of these kinds of models, and we demonstrate this for another example – actually a more famous model – the Haldane model, which has a honeycomb lattice.”

The researchers then created a grid on a floor at Orange Glen High School, with lines in blue or red joining neighbouring squares. These lines defined whether the interaction between the sites was parallel or antiparallel (that is, whether the occupants of the squares should wave their flags in the same or opposite direction to each other when prompted).

Commander and caller

A “commander” acts as the initial excitation that starts things off. This is prompted by someone who is not part of the 2D lattice, whom the researchers liken to a caller in line, square or contra dancing. The caller then prompts the commander to come to a standstill, at which point all those who have their flags waving determine if they have a “match”, that is, if they are dancing in kind or opposite to their neighbours as designated by the blue and red lines. Those with a match then stop moving, after which the “commander” or excitation moves to the one site where there is no such match.

Yuen-Zhou and Du taught the choreography to second- and third-year high school students. The result was that excitations propagated around the edge of the lattice, but bulk excitations fizzled out. There was also a resistance to “defects”.

“The main point about topological properties is that they are characterized by mathematics that are insensitive to many details,” says Yuen-Zhou. “While we choreograph the dance, even if there are imperfections and the students mess up, the dance remains and there is the flow of the dance along the edges of the group of people.”

The researchers were excited about showing that even a system as familiar as a group of people could provide an analogue of a topological material, since so far these properties have been “restricted to very highly engineered systems or very exotic materials,” as Yuen-Zhou points out.

“The mapping of a wave function to real numbers then to human movements clearly indicates the thought process of the researchers to make it more meaningful to students as an outreach activity,” says Shanti Pise, a principal technical officer at the Indian Institute of Science Education and Research in Pune. She was not involved in the research project but specializes in using dance to teach mathematical ideas. “I think this unique integration of wave physics and dance would also give a direction to many researchers, teachers and the general audience to think, experiment and share their ideas!”

The research is described in Science Advances.

Almost 70% of US students with an interest in physics leave the subject, finds survey https://physicsworld.com/a/almost-70-of-us-students-with-an-interest-in-physics-leave-the-subject-finds-survey/ Wed, 11 Sep 2024 11:30:40 +0000 https://physicsworld.com/?p=116659 The survey followed almost 4000 first-year students taking introductory physics courses at four US universities

More than two-thirds of college students in the US who initially express an interest in studying physics drop out to pursue another degree. That is according to a five-year-long survey by the American Institute of Physics (AIP), which found that students often quit due to a lack of confidence in mathematics or poor experiences with physics departments and instructors. Most students, however, ended up in another science, technology, engineering and mathematics (STEM) field.

Carried out by AIP Statistical Research, the survey initially followed almost 4000 students in their first year of college who were taking an introductory physics course at four large, predominantly white universities.

Students highlighted “learning about the universe”, “applying their problem-solving and maths skills”, “succeeding in a challenging subject” and “pursuing a satisfying career” as reasons why they chose to study physics.

Anne Marie Porter and her colleagues Raymond Chu and Rachel Ivie concentrated on the 745 students who had expressed interest in pursuing physics, following them for five academic years.

Over that period, only 31% graduated with a physics degree, with most of those who left switching to another degree during their first or second year. Under-represented groups, including women, African-American and Hispanic students, were the most likely to leave physics degree courses.

Pull and push

While many who quit physics enjoyed their experience, they left due to “issues with poor teaching quality and large class sizes” as well as “negative perceptions that physics employment consists only of academic positions and desk jobs”. Self-appraisal played a role in the decision to leave too. “They may feel unable to succeed because they lack the necessary skills in physics,” Porter says. “That’s a reason for concern.”

Porter adds that intervention early in college is essential to retain physicists, with introductory physics courses being “incredibly important”. Indeed, the survey comes at a time when the number of bachelor’s degrees in physics awarded by US universities is growing more slowly than in other STEM fields.

Meanwhile, a separate report published by the National Academies of Science, Engineering, and Medicine has called on the US government to adopt a new strategy to recruit and retain talent in STEM subjects. In particular, the report urges Congress to smooth the path to permanent residency and US citizenship for foreign-born individuals working in STEM fields.

Improved antiproton trap could shed more light on antimatter-matter asymmetry https://physicsworld.com/a/improved-antiproton-trap-could-shed-more-light-on-antimatter-matter-asymmetry/ Wed, 11 Sep 2024 08:30:14 +0000 https://physicsworld.com/?p=116667 Maxwell's demon cooling trap measures the magnetic moment of antiprotons with higher precision than ever before

The "Maxwell’s demon cooling double trap" developed by the BASE collaboration can cool antiprotons very quickly

A novel particle trap invented at CERN could allow physicists to measure the magnetic moment of the antiproton with higher precision than ever before. The experiment, carried out by the international BASE collaboration, revealed that the magnetic moments of the antiparticles differ from those of their matter counterparts by no more than one part in a billion (10⁻⁹).

One of the biggest mysteries in physics today is why the universe appears to be made up almost entirely of matter and contains only tiny amounts of antimatter. According to the Standard Model, our universe should be largely matter-less. This is because when the universe formed nearly 14 billion years ago, equal amounts of antimatter and matter were generated. When pairs of these antimatter and matter particles collided, they annihilated and produced a burst of energy. This energy created new antimatter and matter particles, which annihilated each other again, and so on.

Physicists have been trying to solve this enigma by looking for tiny differences between a particle (such as a proton) and its antiparticle. If successful, such differences (even if extremely small) would shed more light on antimatter–matter asymmetry and perhaps even reveal physics beyond the Standard Model.

The aim of the BASE (Baryon Antibaryon Symmetry Experiment) collaboration is to measure the magnetic moment of the antiproton to extremely high precision and compare it with the magnetic moment of the proton. To do this, the researchers are using Penning traps, which employ magnetic and electric fields to hold a negatively charged antiproton, and can store antiprotons for years.
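
For a sense of the scales involved – the field value here is an illustrative assumption, not a figure from the paper – the strong axial magnetic field of a Penning trap provides radial confinement while a quadrupole electric field plugs the axial direction, and an antiproton in a field of order 2 T undergoes cyclotron motion at

    \[ f_c = \frac{qB}{2\pi m} \approx \frac{(1.6\times10^{-19}\ \mathrm{C})(2\ \mathrm{T})}{2\pi\times1.67\times10^{-27}\ \mathrm{kg}} \approx 30\ \mathrm{MHz} \]

and it is tiny shifts in such trap eigenfrequencies that the experiment reads out.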

Quicker cooling

Preparing individual antiprotons so that their spin quantum states can be measured, however, involves cooling them to extremely low temperatures, below 200 mK. Previous techniques took 15 h to achieve this, but BASE has now shortened the cooling time to just eight minutes.

The BASE team achieved this feat by joining two Penning traps to make a so-called “Maxwell’s demon cooling double trap”. The first trap cools the antiprotons. The second (referred to as the analysis trap in this study) has the highest magnetic field gradient for a device of its kind, as well as improved noise-protection electronics, a cryogenic cyclotron motion detector and ultrafast transport between the two traps.

The new instrument allowed the researchers to prepare only the coldest antiprotons for measurement, while at the same time rejecting any that had a higher temperature. This means that they did not have to waste time cooling down these warmer particles.
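
The selection step works like the proverbial demon: measure, keep the cold, discard the hot. The toy Monte Carlo below – our illustration with made-up numbers, not the BASE analysis – shows how post-selecting on a low energy cut carves a sub-thermal subensemble out of a warmer thermal distribution:

    # Toy Maxwell's-demon post-selection (illustrative numbers, not BASE data).
    import numpy as np

    rng = np.random.default_rng(1)
    k_B = 1.380649e-23    # Boltzmann constant, J/K
    T0 = 1.0              # assumed starting temperature, K
    cut = 0.2 * k_B * T0  # hypothetical acceptance threshold on energy

    # One degree of freedom of thermal motion: energies follow an
    # exponential (Boltzmann) distribution with mean k_B * T0.
    E = rng.exponential(scale=k_B * T0, size=100_000)
    kept = E[E < cut]

    print(f"fraction accepted:     {kept.size / E.size:.1%}")          # ~18%
    print(f"effective temperature: {kept.mean() / k_B * 1e3:.0f} mK")  # ~100 mK

Rejecting the warm candidates costs throughput, but, as the team notes, it avoids wasting time cooling them down – which is where the enormous speed-up comes from.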

“With our new trap we need a measurement time of around one month, compared with almost 10 years using the old technique, which would be impossible to realize experimentally,” explains BASE spokesperson Stefan Ulmer, an experimental physicist at Heinrich Heine University Düsseldorf and a researcher at CERN and RIKEN.

Ulmer says that he and his colleagues have already been able to measure that the magnetic moments of protons and antiprotons differ by no more than one part in a billion (10⁻⁹). They have also reduced the error rate in identifying the antiproton’s spin state by more than a factor of 1000. Reducing this error rate was one of the team’s main motivations for the project.

The new cooling device could be of benefit to the Penning trap community at large, since colder particles generally result in more precise measurements. For example, it could be used for phase sensitive detection methods or spin state analysis, says Barbara Maria Latacz, CERN team member and lead author of this study. “Our trap is particularly interesting because it is relatively simple and robust compared to laser cooling systems,” she tells Physics World. “Specifically, it allows us to cool a single proton or antiproton to temperatures below 200 mK in less than eight minutes, which is not achievable with other cooling methods.”

The new device will now be a key element of the BASE experimental set-up, she says.

Looking forward, the researchers hope to improve the detection accuracy of the antiproton magnetic moment to one part in 10¹⁰ in their next measurement campaign. They report their current work in Physical Review Letters.

Vacuum for physics research https://physicsworld.com/a/vacuum-for-physics-research/ Wed, 11 Sep 2024 07:28:38 +0000 https://physicsworld.com/?p=116423 Join the audience for a live webinar on 8 October 2024 sponsored by Agilent Technologies

Your research can’t happen without vacuum! If you’re pushing the boundaries of science or technology, you know that creating a near-perfect empty space is crucial. Whether you’re exploring the mysteries of subatomic particles, simulating the harsh conditions of outer space, or developing advanced materials, mastering ultra-high vacuum (UHV) and extreme-high vacuum (XHV) is essential.

In this live webinar:

  • You will learn how vacuum enables physics research, from quantum computing, to fusion, to the fundamental nature of the universe.
  • You will discover why ultra-low-pressure environments directly impact the success of your experiments.
  • We will dive into the latest techniques and technologies for creating and maintaining UHV and XHV.

Join us to gain practical insights and stay ahead in your field – because in your research, vacuum isn’t just important; it’s critical.

John Screech graduated in 1986 with a BA in physics and has worked in analytical instrumentation ever since. His career has spanned general mass spectrometry, vacuum system development, and contraband detection. John joined Agilent in 2011 and currently leads training and education programmes for the Vacuum Products division. He also assists Agilent’s sales force and end-users with pre- and post-sales applications support. He is based near Toronto, Canada.

Flagship journal Reports on Progress in Physics marks 90th anniversary with two-day celebration https://physicsworld.com/a/flagship-journal-reports-on-progress-in-physics-marks-90th-anniversary-with-two-day-celebration/ Tue, 10 Sep 2024 16:05:02 +0000 https://physicsworld.com/?p=116670 A new future lies in store for Reports on Progress in Physics as the journal turns 90

When the British physicist Edward Andrade wrote a review paper on the structure of the atom in the first volume of the journal Reports on Progress in Physics (ROPP) in 1934, he faced a problem familiar to anyone seeking to summarize the latest developments in a field. So much exciting research had happened in atomic physics that Andrade was finding it hard to cram everything in. “It is obvious, in view of the appalling number of papers that have appeared,” he wrote, “that only a small fraction can receive reference.”

Apologizing that “many elegant pieces of work have been deliberately omitted” due to a lack of space, Andrade pleaded that he had “honestly tried to maintain a just balance between the different schools [of thought]”. Nine decades on, Andrade’s struggles will be familiar to anyone who has ever tried to write a review paper, especially about a fast-moving area of physics. Readers, however, appreciate the effort authors put in because review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature.

Writing review papers also benefits authors because such articles are usually widely read and cited by other scientists – far more, in fact, than papers containing new research findings. As a result, most review journals have an extraordinarily high “impact factor” – the mean number of citations received in a given year by articles the journal published in the previous two years. ROPP, for example, has an impact factor of 19.0. While there are flaws in using the impact factor to judge the quality of a journal, it is still a well-respected metric in many parts of the world. And who wouldn’t want to appear in a journal with that much influence?
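
To make that definition concrete, here is a minimal sketch of the calculation; the citation and article counts below are invented for illustration and merely chosen so the result matches ROPP’s quoted figure:

```python
# Impact factor for year Y: citations received in Y by articles published
# in years Y-1 and Y-2, divided by the number of articles published in
# those two years. All numbers below are invented for illustration.
citations_received_2023 = {2021: 950, 2022: 570}  # hypothetical counts
articles_published = {2021: 40, 2022: 40}         # hypothetical counts

impact_factor = (sum(citations_received_2023.values())
                 / sum(articles_published.values()))
print(f"Impact factor: {impact_factor:.1f}")  # 19.0
```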

New dawn for ROPP

Celebrating its 90th anniversary this year, ROPP is the flagship journal of IOP Publishing, which also publishes Physics World. As a learned-society publisher, IOP Publishing does not have shareholders, with any financial surplus ploughed back into the Institute of Physics (IOP) to support everyone from physics students to physics teachers. In contrast to journals owned by commercial publishers, therefore, ROPP has the international physics community at its heart.

Over the last nine decades, ROPP has published over 2500 review papers. There have been more than 20 articles by Nobel-prize-winning physicists, including famous figures from the past such as Hans Bethe (stellar evolution), Lawrence Bragg (protein crystallography) and Abdus Salam (field theory). More recently, ROPP has published papers by still-active Nobel laureates including Konstantin Novoselov (2D materials), Ferenc Krausz (attosecond physics) and Isamu Akasaki (blue LEDs) – see the box below for a full list.

But the journal isn’t resting on its laurels. ROPP has recently started accepting articles containing new scientific findings for the first time, with the plan being to publish 150–200 very-high-quality primary-research papers each year. They will be in addition to the usual output of 50 or so review papers, most of which will still be commissioned by ROPP’s active editorial board. IOP Publishing hopes the move will cement the journal’s place at the pinnacle of its publishing portfolio.

“ROPP will continue as before,” says Subir Sachdev, a condensed-matter physicist from Harvard University, who has been editor-in-chief of the journal since 2022. “There’s no change to the review format, but what we’re doing is really more of an expansion. We’re adding a new section containing original research articles.” The journal is also offering an open-access option for the first time, thereby increasing the impact of the work. In addition, authors have the option to submit their papers for “double anonymous” and transparent peer review.

Maintaining high standards

Those two new initiatives – publishing primary research and offering an open-access option – are probably the biggest changes in the journal’s 90-year history. But Sachdev is confident the journal can cope. “Of course, we want to maintain our high standards,” he says. “ROPP has over the years acquired a strong reputation for very-high-quality articles. With the strong editorial board and the support we have from referees, we hope we will be able to maintain that.”

Early signs are promising. Among the first primary-research papers in ROPP are CERN’s measurement of the speed of sound in a quark–gluon plasma (87 077801), a study into flaws in the Earth’s gravitational field (87 078301), and an investigation into whether supersymmetry could be seen in 2D materials (10.1088/1361-6633/ad77f0). A further paper looks into creating an overarching equation of state for liquids based on phonon theory (87 098001).

David Gevaux, ROPP’s chief editor, who is in charge of the day-to-day running of the journal, is pleased with the quality and variety of primary research published so far. “The idea is to publish a relatively small number of papers – no more than 200 max – but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing,” he says. “Our first papers have covered a broad range of physics, from condensed matter to astronomy.”

Another benefit of ROPP only publishing a select number of papers is that each article can have, as Gevaux explains, “a little bit more love” put into it. “Traditionally, publishers were all about printing journals and sending them around the world – it was all about distribution,” he says. “But with the Internet, everything’s immediately available and researchers almost have too many papers to trawl through. As a flagship journal, ROPP gives its published authors extra visibility, potentially through a press release or coverage in Physics World.”

Nobel laureates who have published in ROPP

Since its launch in 1934, Reports on Progress in Physics has published papers by numerous top scientists, including more than 20 current or future Nobel-prize-winning physicists. A selection of those papers written or co-authored by Nobel laureates over the journal’s first 90 years is given chronologically below. For brevity, papers by multiple authors list only the contributing Nobel winner.

Nevill Mott 1938 “Recent theories of the liquid state” (5 46) and 1939 “Reactions in solids” (6 186)
Hans Bethe 1939 “The physics of stellar interiors and stellar evolution” (6 1)
Max Born 1942 “Theoretical investigations on the relation between crystal dynamics and x-ray scattering” (9 294)
Martin Ryle 1950 “Radio astronomy” (13 184)
Willis Lamb 1951 “Anomalous fine structure of hydrogen and singly ionized helium” (14 19)
Abdus Salam 1955 “A survey of field theory” (18 423)
Alexei Abrikosov 1959 “The theory of a Fermi liquid” (22 329)
David Thouless 1964 “Green functions in low-energy nuclear physics” (27 53)
Lawrence Bragg 1965 “First stages in the x-ray analysis of proteins” (28 1)
Melvin Schwartz 1965 “Neutrino physics” (28 61)
Pierre-Gilles de Gennes 1969 “Some conformation problems for long macromolecules” (32 187)
Dennis Gabor 1969 “Progress in holography” (32 395)
John Clauser 1978 “Bell’s theorem. Experimental tests and implications” (41 1881)
Norman Ramsey 1982 “Electric-dipole moments of elementary particles” (45 95)
Martin Perl 1992 “The tau lepton” (55 653)
Charles Townes 1994 “The nucleus of our galaxy” (57 417)
Pierre Agostini 2004 “The physics of attosecond light pulses” (67 813)
Takaaki Kajita 2006 “Discovery of neutrino oscillations” (69 1607)
Konstantin Novoselov 2011 “New directions in science and technology: two-dimensional crystals” (74 082501)
John Michael Kosterlitz 2016 “Kosterlitz–Thouless physics: a review of key issues” (79 026001)
Anthony Leggett 2016 “Liquid helium-3: a strongly correlated but well understood Fermi liquid” (79 054501)
Ferenc Krausz 2017 “Attosecond physics at the nanoscale” (80 054401)
Isamu Akasaki 2018 “GaN-based vertical-cavity surface-emitting lasers with AlInN/GaN distributed Bragg reflectors” (82 012502)

An event for the community

As another reminder of its place in the physics community, ROPP is hosting a two-day event at the IOP’s headquarters in London and online. Taking place on 9–10 October 2024, the hybrid event will present the latest cutting-edge condensed-matter research, from fundamental work to applications in superconductivity, topological insulators, superfluids, spintronics and beyond. Confirmed speakers at Progress in Physics 2024 include Piers Coleman (Rutgers University), Susannah Speller (University of Oxford), Nandini Trivedi (Ohio State University) and many more.

“We’re taking the journal out into the community,” says Gevaux. “IOP Publishing is very heavily associated with the IOP and of course the IOP has a large membership of physicists in the UK, Ireland and beyond. With the meeting, the idea is to bring that community and the journal together. This first meeting will focus on condensed-matter physics, with some of the ROPP board members giving plenary talks along with lectures from invited, external scientists and a poster session too.”

Longer-term, IOP Publishing plans to put ROPP at the top of a wider series of journals under the “Progress in” brand. The first of those journals is Progress in Energy, which was launched in 2019 and – like ROPP – has now also expanded its remit to include primary-research papers. Other, similar spin-off journals in different topic areas will be launched over the next few years, giving IOP Publishing what it hopes will be a series of journals to match the best in the world.

For Sachdev, publishing with ROPP is all about having “the stamp of approval” from the academic community. “So if you think your field has now reached a point where a scholarly assessment of recent advances is called for, then please consider ROPP,” he says. “We have a very strong editorial board to help you produce a high-quality, impactful article, now with the option of open access and publishing really high-quality primary research papers too.”

Quantum growth drives investment in diverse skillsets https://physicsworld.com/a/quantum-growth-drives-investment-in-diverse-skillsets/ Tue, 10 Sep 2024 13:20:05 +0000 https://physicsworld.com/?p=116590 Scientific equipment makers are building a diverse workforce to feed into expanding markets in quantum technologies and low-temperature materials measurement

The meteoric rise of quantum technologies from research curiosity to commercial reality is creating all the right conditions for a future skills shortage, while the ongoing pursuit of novel materials continues to drive demand for specialist scientists and engineers. Within the quantum sector alone, headline figures from McKinsey & Company suggest that less than half of available quantum jobs will be filled by 2025, with global demand being driven by the burgeoning start-up sector as well as enterprise firms that are assembling their own teams to explore the potential of quantum technologies for transforming their businesses.

While such topline numbers focus on the expertise that will be needed to design, build and operate quantum systems, a myriad of other skilled professionals will be needed to enable the quantum sector to grow and thrive. One case in point is the diverse workforce of systems engineers, measurement scientists, service engineers and maintenance technicians who will be tasked with building and installing the highly specialized equipment and instrumentation that is needed to operate and monitor quantum systems.

“Quantum is an incredibly exciting space right now, and we need to prepare for the time when it really takes off and explodes,” says Matt Martin, Managing Director of Oxford Instruments NanoScience, a UK-based company that manufactures high-performance cryogenics systems and superconducting magnets. “But for equipment makers like us the challenge is not just about quantum, since we are also seeing increased demand from both academia and industry for emerging applications in scientific measurement and condensed-matter physics.”

Martin points out that Oxford Instruments already works hard to identify and nurture new talent. Within the UK the company has for many years sponsored doctoral students to foster a deeper understanding of physics in the ultracold regime, and it also offers placements to undergraduates to spark an early interest in the technology space. The firm is also dialled into the country’s apprenticeship scheme, which offers an effective way to train young people in the engineering skills needed to manufacture and maintain complex scientific instruments.

Despite these initiatives, Martin acknowledges that NanoScience faces the same challenges as other organizations when it comes to recruiting high-calibre technical talent. In the past, he says, a skilled scientist would have been involved in all stages of the development process, but now the complexity of the systems, and the depth of focus required to drive innovation across multiple areas of science and engineering, have led to the need for greater specialization. While collaboration with partners and sister companies can help, the onus remains on each business to develop a core multidisciplinary team.

Building ultracold and measurement expertise

The key challenge for companies like Oxford Instruments NanoScience is finding physicists and engineers who can create the ultracold environments that are needed to study both quantum behaviour and the properties of novel materials. Compounding that issue is the growing trend towards providing the scientific community with more automated solutions, which has made it much easier for researchers to configure and conduct experiments at ultralow temperatures.

“In the past PhD students might have spent a significant amount of time building their experiments and the hardware needed for their measurements,” explains Martin. “With today’s push-button solutions they can focus more on the science, but that changes their knowledge because there’s no need for them to understand what’s inside the box. Today’s measurement scientists are increasingly skilled in Python and integration, but perhaps less so in hardware.”

Developing such comprehensive solutions demands a broader range of technical specialists, such as software programmers and systems engineers, who are in short supply across all technology-focused industries. With many other enticing sectors vying for their attention – the green economy, energy and life sciences, and the rise of AI-enabled robotics – Martin understands the importance of inspiring young people to devote their energies to the technologies that underpin the quantum ecosystem. “We’ve got to be able to tell our story, to show why this new and emerging market is so exciting,” he says. “We want them to know that they could be part of something that will transform the future.”

To raise that awareness Oxford Instruments has been working to establish a series of applications centres in Japan, the US and the UK. One focus for the centres will be to provide training that helps users to get the most out of the company’s instruments, particularly those without direct experience of building and configuring an ultracold system. But another key objective is to expose university-level students to research-grade technology, which in turn should help to highlight future career options within the instrumentation sector.

To build on this initiative Oxford Instruments is now actively discussing opportunities to collaborate with other companies on skills development and training in the US. “We all want to provide some hands-on learning for students as they progress through their university education, and we all want to find ways to work with government programmes to stimulate this training,” says Martin. “It’s better for us to work together to deliver something more substantial rather than doing things in a piecemeal way.”

That collaboration is likely to centre around an initiative launched by US firm Quantum Design back in 2015. Under the scheme, now badged Discovery Teaching Labs, the company has donated its commercial system for low-temperature materials analysis, the PPMS VersaLab, to several university departments in the US. As part of the initiative the course professors are also asked to create experimental modules that enable undergraduate students to use this state-of-the-art technology to explore key concepts in condensed-matter physics.

“Our initial goal was to partner with universities to develop a teaching curriculum that uses hands-on learning to inspire students to become more interested in physics,” says Quantum Design’s Barak Green, who has been a passionate advocate for the scheme. “By enabling students to become confident with using these advanced scientific instruments, we have also found that we have equipped them with vital technical skills that can open up new career paths for them.”

One of the most successful partnerships has been with California State University San Marcos (CSUSM), a small college that mainly attracts students from communities with no prior tradition of pursuing a university education. “There is no way that the students at CSUSM would have been able to access this type of equipment in their undergraduate training, but now they have a year-long experimental programme that enhances their scientific learning and makes them much more comfortable with using such an advanced system,” says Green. “Many of these students can’t afford to stay in school to study for a PhD, and this programme has given them the knowledge and experience they need to get a good job.”

Indeed, Quantum Design has already hired around 20 students from CSUSM and other local programmes. “We didn’t start the initiative with that in mind, but over the years we discovered that we had all these highly skilled people who could come and work for us,” Green continues. “Students who only do theory are often very nervous around these machines, but the CSUSM graduates bring a whole extra layer of experience and know-how. Not everyone needs to have a PhD in quantum physics; we also need people who can go into the workforce and build the systems that the scientists rely on.”

This overwhelming success has given greater impetus to the programme, with Quantum Design now seeking to bring in other partners to extend its reach and impact. LakeShore Cryotronics, a long-time collaborator that designs and builds low-temperature measurement systems that can be integrated into the VersaLab, was the first company to make the commitment. In 2023 the US-based firm donated one of its M91 FastHall measurement platforms to join the VersaLab already installed at CSUSM, and the two partners are now working together to establish an undergraduate teaching lab at Stony Brook University in New York.

“It’s an opportunity for like-minded scientific companies to give something back to the community, since most of our products are not affordable for undergraduate programmes,” says LakeShore’s Chuck Cimino, who has now joined the board of advisors for the Discovery Teaching Labs programme. “Putting world-class equipment into the hands of students can influence their decisions to continue in the field, and in the long term will help to build a future workforce of skilled scientists and engineers.”

Conversations with other equipment makers at the 2024 APS March Meeting also generated significant interest, potentially paving the way for Oxford Instruments to join the scheme. “It’s a great model to build on, and we are now working to see how we might be able to commit some of our instruments to those training centres,” says Martin, who points out that the company’s Proteox S platform offers the ideal entry-level system for teaching students how to manage a cold space for experiments with qubits and condensed-matter systems. “We’ve developed a lot of training on the hardware and the physicality of how the systems work, and in that spirit of sharing there’s lots of useful things we could do.”

While those discussions continue, Martin is also looking to a future when quantum-powered processors become a practical reality in compute-intensive settings such as data centres. “At that point there will be huge demand for ultracold systems that are capable of hosting and operating large-scale quantum computers, and we will suddenly need lots of people who can install and service those sorts of systems,” he says. “We are already thinking about ways to set up training centres to develop that future workforce, which will primarily be focused around service engineers and maintenance technicians.”

Martin believes that partnering with government labs could offer a solution, particularly in the US where various initiatives are already in place to teach technical skills to college-level students. “It’s about taking that forward view,” he says. “We have already built a product that can be used for training purposes, and we have started discussions with US government agencies to explore how we could work together to build the workforce that will be needed to support the big industrial players.”

Quantum brainwave: using wearable quantum technology to study cognitive development https://physicsworld.com/a/quantum-brainwave-using-wearable-quantum-technology-to-study-cognitive-development/ Tue, 10 Sep 2024 10:00:04 +0000 https://physicsworld.com/?p=116530 Margot Taylor and David Woolger explain to Physics World why quantum-sensing technology is a game-changer for studying children’s brains

Though she isn’t a physicist or an engineer, Margot Taylor has spent much of her career studying electrical circuits. As the director of functional neuroimaging at the Hospital for Sick Children in Toronto, Canada, Taylor has dedicated her research to the most complex electrochemical device on the planet – the human brain.

Taylor uses various brain imaging techniques including MRI to understand cognitive development in children. One of her current projects uses a novel quantum sensing technology to map electrical brain activity. Magnetoencephalography with optically pumped magnetometry (OPM-MEG) is a wearable technology that uses quantum spins to localize electrical impulses coming from different regions of the brain.

Physics World’s Hamish Johnston caught up with Taylor to discover why OPM-MEG could be a breakthrough for studying children, and how she’s using it to understand the differences between autistic and non-autistic people.

The OPM-MEG helmets Taylor uses in this research were developed by Cerca Magnetics, a company founded in 2020 as a spin-out from the University of Nottingham‘s Sir Peter Mansfield Imaging Centre in the UK. Johnston also spoke to Cerca’s chief executive David Woolger, who explained how the technology works and what other applications they are developing.

Margot Taylor: understanding the brain

What is magnetoencephalography, and how is it used in medicine?

Magnetoencephalography (MEG) is the most sensitive non-invasive means we have of assessing brain function. Specifically, the technique gives us information about electrical activity in the brain. It doesn’t give us any information about the structure of the brain, but the disorders that I’m interested in are disorders of brain function, rather than disorders of brain structure. There are some other techniques, but MEG gives us amazing temporal and spatial resolution, which makes it very valuable.

So you’re measuring electrical signals. Does that mean that the brain is essentially an electrical device?

Indeed, brains are hugely complex electrical devices. Technically the activity is electrochemical, but we are measuring the electrical signals that are the product of the electrochemical reactions in the brain.

When you perform MEG, how do you know where that signal’s coming from?

We usually get a structural MRI as well, and then we have very good source localization approaches so that we can tell exactly where in the brain different signals are coming from. We can also get information about how the signals are connecting with each other, the interactions among different brain regions, and the timing of those interactions.

Why does quantum MEG make it easier to do brain scans on children?

The quantum technology is called optically pumped magnetometry (OPM) and it’s a wearable system, where the sensors are placed in a helmet. This means movement is allowed, because the helmet moves with the child. We’re able to record brain signals in very young children because they can move or sit on their parents’ laps – they don’t have to be lying perfectly still.

Conventional MEG uses cryogenic technology and is typically one-size-fits-all. It’s designed for an adult male head, and if you put in a small child, their head is a long way from the sensors. With OPM, however, the helmet can be adapted for different sized heads – we have everything from tiny helmets up to much bigger ones. This is a game changer in terms of recording signals in little children.

Can you tell us more about the study you’re leading at the Hospital for Sick Children in Toronto using a quantum MEG system from the UK’s Cerca Magnetics?

We are looking at early brain function in autistic and non-autistic children. Autism is usually diagnosed by about three years of age, although sometimes it’s not diagnosed until children are older. But if a child could be diagnosed with autism earlier, then interventions could start earlier. And so we’re looking at autistic and non-autistic children, as well as children who have a high likelihood of being autistic, to see if we can find brain signals that will predict whether they will go on to get a diagnosis or not.

How do the responses you measure using quantum MEG differ between autistic and non-autistic people, or those with a high likelihood of developing autism?

We don’t have that data yet because we’re looking at children who have a high likelihood of being autistic, so we have to wait another year or so, as they grow up, to see if they get a diagnosis. For the children who do have a diagnosis of autism already, it seems like the responses are atypical, but we haven’t fully analysed that data. We think that there is a signal there that we’ll be able to report in the foreseeable future, but we have only tested 32 autistic children so far, and we’d like to get more data before we publish.

Do you have any preliminary results or published papers based on this data yet?

We’re still analysing data. We’re seeing beautiful, age-related changes in our cohort of non-autistic children. Because nobody has been able to do these studies before, we have to establish foundational datasets with non-autistic children before we can compare them with autistic children or children who have a high likelihood of being autistic. And those will be published very shortly.

Are you using the quantum MEG system for anything else at the moment?

With the OPM system, we’re also setting up studies looking at children with epilepsy. We want to compare the OPM technology with the cryogenic MEG and other imaging technologies and we’re working with our colleagues to do that. We’re also looking at children who have a known genetic disorder to see if they have brain signals that predict whether they will also go on to experience a neurodevelopmental disorder. We’re also looking at children who are born to mothers with HIV to see if we can get an indication of what is happening in their brains that may affect their later development.

David Woolger: expanding applications

Can you give us a brief description of Cerca Magnetics’ technology and how it works?

When a neuron fires, you get an electrical current and a corresponding magnetic field. Our technology uses optically pumped magnetometers (OPMs), which are very sensitive to magnetic fields. Effectively, we’re sensing magnetic fields 500 million times weaker than the Earth’s magnetic field.

To do that, as well as using the quantum sensors, we need to shield against the Earth’s magnetic field, so we work in a shielded environment with both active and passive shielding. We are then able to measure the magnetic fields from the brain, which we can use to understand functionally what’s going on in that area.
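
Put into numbers, Woolger’s “500 million times” figure works out as follows. This is a quick sketch; the ~50 µT value for the Earth’s surface field is a typical textbook figure, not one given in the interview:

```python
# Translate "500 million times weaker than the Earth's magnetic field".
# Earth's surface field is roughly 50 microtesla (typical textbook value;
# it varies with location and is an assumption, not a quote).
EARTH_FIELD_T = 50e-6
brain_field_T = EARTH_FIELD_T / 500e6
print(f"target field scale: {brain_field_T:.0e} T")  # 1e-13 T, ~100 femtotesla
```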

Are there any other applications for this technology beyond your work with Margot Taylor?

There’s a vast number of applications within the field of brain health. For example, we’re working with a team in Oxford at the moment, looking at dementia. So that’s at the other end of the life cycle, studying ways to identify the disease much earlier. If you can do that you can potentially start treatment with drugs or other interventions earlier.

Outside brain health, there are a number of groups who are using this quantum technology in other areas of medical science. One group in Arkansas is looking at foetal imaging during pregnancy, using it to see much more clearly than has previously been possible.

There’s another group in London looking at spinal imaging using OPM. Concussion is another potential application of these sensors, for sports or military injuries. There’s a vast range of medical-imaging applications that can be done with these sensors.

Have you looked at non-medical applications?

Cerca is very much a medical-imaging company, but I am aware of other applications of the technology. For example, applications with car batteries have the potential to be a big market. When manufacturers make car batteries, there’s a lot of electrochemistry that goes into the cells. If you can image those processes during production, you can effectively optimize the production cycle, and therefore reduce the cost of the batteries. This has a real potential benefit for use in electric cars.

What’s next for Cerca Magnetics’ technology?

We are in a good position in that we’ve been able to deliver our initial systems to the research market and actually earn revenue. We have made a profit every year since we started trading. We have then reinvested that profit back into further development. For example, we are looking at scanning two people at once, looking at other techniques that will continue to develop the product, and most importantly, working on medical device approval. At the moment, our system is only sold to research institutes, but we believe that if the product were available in every hospital and every doctor’s surgery, it could have an incredible societal impact across the human lifespan.

Magnetoencephalography with optically pumped magnetometers

Schematic showing the working principle behind optically pumped magnetometry

Like any electrical current, signals transmitted by neurons in the brain generate magnetic fields. Magnetoencephalography (MEG) is an imaging technique that detects these signals and locates them in the brain. MEG has been used to plan brain surgery to treat epilepsy. It is also being developed as a diagnostic tool for disorders including schizophrenia and Alzheimer’s disease.

MEG traditionally uses superconducting quantum interference devices (SQUIDs), which are sensitive to very small magnetic fields. However, SQUIDs must be cryogenically cooled, which makes the technology bulky and immobile. Magnetoencephalography with optically pumped magnetometers (OPM-MEG) is an alternative technology that operates at room temperature. Optically pumped magnetometers (OPMs) are small quantum devices that can be integrated into a helmet, which is an advantage for imaging children’s brains.

The key components of an OPM device are a cloud of alkali atoms (generally rubidium), a laser and a photodetector. Initially, the spins of the atoms point in random directions (top row in figure), but shining polarized laser light of the correct frequency aligns the spins along the direction of the light (middle row in figure). When the atoms are in this state, they are transparent to the laser, so the signal reaching the photodetector is at a maximum.

However, in the presence of a magnetic field, such as that from a brain wave, the spins of the atoms are perturbed and they are no longer aligned with the laser (bottom row in figure). The atoms can now absorb some of the laser light, which reduces the signal reaching the photodetector.
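
A toy model captures this qualitative behaviour: transmission through the vapour is maximal at zero field and drops as an applied field perturbs the spins. The Lorentzian line shape and all parameter values below are illustrative assumptions, not the response of any real sensor:

```python
# Toy zero-field resonance of an optically pumped magnetometer: the vapour
# is transparent at B = 0 (spins aligned with the laser) and absorbs more
# light as a field B perturbs the spins. Line shape and parameters are
# illustrative assumptions only.
def transmitted_signal(B_tesla, p0=1.0, contrast=0.5, linewidth_T=1e-9):
    """Photodetector signal versus applied field B (arbitrary units)."""
    lorentzian = 1.0 / (1.0 + (B_tesla / linewidth_T) ** 2)
    return p0 * (1.0 - contrast * (1.0 - lorentzian))  # maximal at B = 0

for B in (0.0, 1e-10, 1e-9, 1e-8):  # fields in tesla
    print(f"B = {B:.0e} T -> signal = {transmitted_signal(B):.3f}")
```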

In OPM-MEG, these devices are placed around the patient’s head and integrated into a helmet. By measuring the signal from the devices and combining this with structural images and computer modelling, it’s possible to work out where in the brain the signal came from. This can be used to understand how electrical activity in different brain regions is linked to development, brain disorders and neurodivergence.

Katherine Skipper

Electro-active material ‘learns’ to play Pong https://physicsworld.com/a/electro-active-material-learns-to-play-pong/ Tue, 10 Sep 2024 08:45:39 +0000 https://physicsworld.com/?p=116611 Memory-like behaviour emerges in a polymer gel

An electro-active polymer hydrogel can be made to “memorize” experiences in the same way as biological neurons do, say researchers at the University of Reading, UK. The team demonstrated this finding by showing that when the hydrogel is configured to play the classic video game Pong, it improves its performance over time. While it would be simplistic to say that the hydrogel truly learns like humans and other sentient beings, the researchers say their study has implications for studies of artificial neural networks. It also raises questions about how “simple” such a system can actually be, if it is capable of such complex behaviour.

Artificial neural networks are machine-learning algorithms that are configured to mimic structures found in biological neural networks (BNNs) such as human brains. While these forms of artificial intelligence (AI) can solve problems through trial and error without being explicitly programmed with pre-defined rules, they are not generally regarded as being adaptive, as BNNs are.

Playing Pong with neurons

In a previous study, researchers led by neuroscientist Karl Friston of University College London, UK and Brett Kagan of Cortical Labs in Melbourne, Australia, integrated a BNN with computing hardware by growing a large cluster of human neurons on a silicon chip. They then connected this chip to a computer programmed to play a version of Pong, a table-tennis-like game that originally involved a player and the computer bouncing an electronic ball between two computerized paddles. In this case, however, the researchers simplified the game so that there was only a single paddle on one side of the virtual table.

To find out whether this paddle had contacted the ball, Friston, Kagan and colleagues transmitted electrical signals to the neuronal network via the chip. At first, the neurons did not play Pong very well, but over time, they hit the ball more frequently and made more consecutive hits, allowing for longer rallies.

In this earlier work, the researchers described the neurons as being able to “learn” the game thanks to the concept of free energy as defined by Friston in 2010. He argued that neurons endeavour to minimize free energy, and therefore “choose” the option that allows them to do this most efficiently.

An even simpler version

Inspired by this feat and by the technique employed, the Reading researchers wondered whether such an emergent memory function could be generated in media that were even simpler than neurons. For their experiments, they chose to study a hydrogel (a complex polymer that jellifies when hydrated) that contains free-floating ions. These ions make the polymer electroactive, meaning that its behaviour is influenced by an applied electric field. As the ions move, they draw water with them, causing the gel to swell in the area where the electric field is applied.

The time it takes for the hydrogel to swell is much greater than the time it takes to de-swell, explains team member Vincent Strong. “This means there is a form of hysteresis in the ion motion because each consecutive stimulation moves the ions less and less as they gather,” says Strong, a robotics engineer at Reading and the first author of a paper in Cell Reports Physical Science on the new research. “This acts as a form of memory since the result of each stimulation on the ion’s motion is directly influenced by previous stimulations and ion motion.”

This form of memory allows the hydrogel to build up experience about how the ball moves in Pong, and thus to move its paddle with greater accuracy, he tells Physics World. “The ions within the gel move in a way that maps a memory of the ball’s motion not just at any given point in time but over the course of the entire game.”
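
The hysteresis Strong describes can be caricatured with a saturating accumulator, in which each identical stimulus shifts the ions less than the one before, so the gel’s state encodes its stimulation history. This is a toy sketch for intuition only, not the model used in the team’s paper:

```python
# Toy memory model: ion displacement x saturates towards 1, so each
# identical stimulus produces a smaller response than the last - a crude
# stand-in for the hysteresis described above, not the paper's model.
def stimulate(x, gain=0.3):
    """Apply one stimulus; the response shrinks as ions accumulate."""
    return x + gain * (1.0 - x)

x = 0.0
for n in range(1, 6):
    x_next = stimulate(x)
    print(f"stimulus {n}: response = {x_next - x:.3f}, state = {x_next:.3f}")
    x = x_next
```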

The researchers argue that their hydrogel represents a different type of “intelligence”, and one that could be used to develop algorithms that are simpler than existing AI algorithms, most of which are derived from neural networks.

“We see this work as an example of how a much simpler system, in the form of an electro-active polymer hydrogel, can perform similar complex tasks to biological neural networks,” Strong says. “We hope to apply this as a stepping stone to finding the minimum system required for such tasks that require memory and improvement over time, looking both into other active materials and tasks that could provide further insight.

“We’ve shown that memory is emergent within the hydrogels, but the next step is to see whether we can also show specifically that learning is occurring.”

Fusion’s public-relations drive is obscuring the challenges that lie ahead https://physicsworld.com/a/fusions-public-relations-drive-is-obscuring-the-challenges-that-lie-ahead/ Mon, 09 Sep 2024 10:00:40 +0000 https://physicsworld.com/?p=116472 Guy Matthews says that the focus on public relations is masking the challenges of commercializing nuclear fusion

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” So stated the Nobel laureate Richard Feynman during a commission hearing into NASA’s Challenger space shuttle disaster in 1986, which killed all seven astronauts onboard.

Those famous words have since been applied to many technologies, but they are becoming especially apt to nuclear fusion where public relations currently appears to have the upper hand. Fusion has recently been successful in attracting public and private investment and, with help from the private sector, it is claimed that fusion power can be delivered in time to tackle climate change in the coming decades.

Yet this rosy picture hides the complexity of the novel nuclear technology and plasma physics involved. As John Evans – a physicist who has worked at the Atomic Energy Research Establishment in Harwell, UK – recently highlighted in Physics World, there is a lack of proven solutions for the fusion fuel cycle, which involves breeding and reprocessing unprecedented quantities of radioactive tritium with extremely low emissions.

Unfortunately, this is just the tip of the iceberg. Another stubborn roadblock lies in instabilities in the plasma itself – for example, so-called Edge Localised Modes (ELMs), which originate in the outer regions of tokamak plasmas and are akin to solar flares. If not strongly suppressed they could vaporize areas of the tokamak wall, causing fusion reactions to fizzle out. ELMs can also trigger larger plasma instabilities, known as disruptions, that can rapidly dump the entire plasma energy and apply huge electromagnetic forces that could be catastrophic for the walls of a fusion power plant.

In a fusion power plant, the total thermal energy stored in the plasma needs to be about 50 times greater than that achieved in the world’s largest machine, the Joint European Torus (JET). JET operated at the Culham Centre for Fusion Energy in Oxfordshire, UK, until it was shut down in late 2023. I was responsible for upgrading JET’s wall to tungsten/beryllium and subsequently chaired the wall protection expert group.

JET was an extremely impressive device, and just before it ceased operation it set a new world record for controlled fusion energy production of 69 MJ. While this was a scientific and technical tour de force, in absolute terms the fusion energy created and plasma duration achieved at JET were minuscule. A power plant with a sustained fusion power of 1 GW would produce 86 million MJ of fusion energy every day. Furthermore, large ELMs and disruptions were a routine feature of JET’s operation and occasionally caused local melting. Such behaviour would render a power plant inoperable, yet these instabilities remain to be reliably tamed.
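
The gulf between that record and a working power plant can be checked with a line of arithmetic – a quick sketch using only the figures quoted in this article:

```python
# Compare JET's record shot with one day of output from a 1 GW fusion
# plant, using only numbers quoted in the article.
JET_RECORD_MJ = 69            # JET's 2023 world record for fusion energy
PLANT_POWER_W = 1e9           # 1 GW of sustained fusion power
SECONDS_PER_DAY = 86_400

daily_MJ = PLANT_POWER_W * SECONDS_PER_DAY / 1e6
print(f"daily output: {daily_MJ:.3g} MJ")                       # 8.64e+07 MJ
print(f"JET records per day: {daily_MJ / JET_RECORD_MJ:,.0f}")  # ~1,252,174
```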

Complex issues

Fusion is complex – solutions to one problem often exacerbate other problems. Furthermore, many of the physics and technology features that are essential for fusion power plants, and that require substantial development and testing in a fusion environment, were not present in JET. One example is the technology to drive the plasma current sustainably using microwaves. The purpose of the international ITER project, which is currently being built in Cadarache, France, is to address such issues.

ITER, which is modelled on JET, is a “low duty cycle” physics and engineering experiment. Delays and cost increases are the norm for large nuclear projects and ITER is no exception. It is now expected to start scientific operation in 2034, but the first experiments using “burning” fusion fuel – a mixture of deuterium and tritium (D–T) – are only set to begin in 2039. ITER, which is equipped with many plasma diagnostics that would not be feasible in a power plant, will carry out an extensive research programme that includes testing tritium-breeding technologies on a small scale, ELM suppression using resonant magnetic perturbation coils, and plasma-disruption mitigation systems.

Yet the challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed following any successful first demonstration of substantial fusion-energy production. Indeed, EUROfusion’s Research Roadmap, which the UK co-authored when it was still part of ITER, sees fusion as only making a significant contribution to global energy production in the course of the 22nd century. This may be politically unpalatable, but it is a realistic conclusion.

The current UK strategy is to construct a fusion power plant – the Spherical Tokamak for Energy Production (STEP) – at West Burton, Nottinghamshire, by 2040 without awaiting results from intermediate experiments such as ITER. This strategy would appear to be a consequence of post-Brexit politics. However, it looks unrealistic scientifically, technically and economically. The total thermal energy of the STEP plasma needs to be about 5000 times greater than has so far been achieved in the UK’s MAST-U spherical tokamak experiment. This will entail an extreme, and unprecedented, extrapolation in physics and technology. Furthermore, the compact STEP geometry means that during plasma disruptions its walls would be exposed to far higher energy loads than ITER, where the wall protection systems are already approaching physical limits.

I expect that the complexity inherent in fusion will continue to provide its advocates, both in the public and private sphere, with ample means to obscure both the severity of the many issues that lie ahead and the timescales required. Returning to Feynman’s remarks, sooner or later reality will catch up with the public relations narrative that currently surrounds fusion. Nature cannot be fooled.

To make Mars warmer, just add nanorods https://physicsworld.com/a/to-make-mars-warmer-just-add-nanorods/ Mon, 09 Sep 2024 08:00:38 +0000 https://physicsworld.com/?p=116609 Releasing engineered nanoparticles into the Martian atmosphere could warm the planet by over 30 K

If humans released enough engineered nanoparticles into the atmosphere of Mars, the planet could become more than 30 K warmer – enough to support some forms of microbial life. This finding is based on theoretical calculations by researchers in the US, and it suggests that “terraforming” Mars to support temperatures that allow for liquid water may not be as difficult as previously thought.

“Our finding represents a significant leap forward in our ability to modify the Martian environment,” says team member Edwin Kite, a planetary scientist at the University of Chicago.

Today, Mars is far too cold for life as we know it to thrive there. But it may not have always been this way. Indeed, streams may have flowed on the red planet as recently as 600 000 years ago. The idea of returning Mars to this former, warmer state – terraforming – has long kindled imaginations, and scientists have proposed several ways of doing it.

One possibility would be to increase the levels of artificial greenhouse gases, such as chlorofluorocarbons, in Mars’ currently thin atmosphere. However, this would require volatilizing roughly 100 000 megatons of fluorine, an element that is scarce on the red planet’s surface. This means that essentially all the fluorine required would need to be transported to Mars from somewhere else – something that is not really feasible.

An alternative would be to use materials already present on Mars’ surface, such as those in aerosolized dust. Natural Martian dust is mainly made of iron-rich minerals distributed in particles roughly 1.5 microns in radius, which are easily lofted to altitudes of 60 km and more. In its current form, this dust actually lowers daytime surface temperatures by attenuating infrared solar radiation. A modified form of dust might, however, experience different interactions. Could this modified dust make the planet warmer?

Nanoparticles designed to trap escaping heat and scatter sunlight

In a proof-of-concept study, Kite and colleagues at the University of Chicago, the University of Central Florida and Northwestern University analysed the atmospheric effects of nanoparticles shaped like short rods about nine microns long, which is about the same size as commercially available glitter. These particles have an aspect ratio of around 60:1, and Kite says they could be made from readily-available Martian materials such as iron or aluminium.

Calculations using the finite-difference time-domain (FDTD) method showed that such nanorods, which are randomly oriented due to Brownian motion, would strongly scatter and absorb upwelling thermal infrared radiation in certain spectral windows. The nanorods would also scatter sunlight down towards the surface, adding to the warming, and would settle out of the atmosphere and onto the Martian surface more than 10 times more slowly than natural dust. This implies that, once airborne, the nanorods would be lofted to high altitudes and remain in the atmosphere for long periods.
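
The quoted geometry pins down just how slender these particles would be – a quick sketch using only numbers from this article:

```python
# Rod geometry implied by the article's figures, compared with the
# natural dust grains (1.5 micron radius) mentioned earlier.
ROD_LENGTH_UM = 9.0     # "about nine microns long"
ASPECT_RATIO = 60.0     # "around 60:1"
DUST_RADIUS_UM = 1.5    # natural Martian dust

rod_diameter_um = ROD_LENGTH_UM / ASPECT_RATIO
print(f"rod diameter: {rod_diameter_um:.2f} um")            # 0.15 um (150 nm)
print(f"dust grain diameter: {2 * DUST_RADIUS_UM:.1f} um")  # 3.0 um
```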

More efficient than previous Martian warming proposals

These factors give the nanorod idea several advantages over comparable schemes, Kite says. “Our approach is over 5000 times more efficient than previous global warming proposals (on a per-unit-mass-in-the-atmosphere basis) because it uses much less mass of material to achieve significant warming,” he tells Physics World. “Previous schemes required importing large amounts of gases from Earth or mining rare Martian resources, [but] we find that nanoparticles can achieve similar warming with a much smaller total mass.”

However, Kite stresses that the comparison only applies to approaches that aim to warm Mars’ atmosphere on a global scale. Other approaches, including one developed by researchers at Harvard University and NASA’s Jet Propulsion Laboratory (JPL) that uses silica aerogels, would be better suited for warming the atmosphere locally, he says, adding that a recent workshop on Mars terraforming provides additional context.

While the team’s research is theoretical, Kite believes it opens new avenues for exploring planetary climate modification. It could inform future Mars exploration or even long-term plans for making Mars more habitable for microbes and plants. Extensive further research would be required, however, before any practical efforts in this direction could see the light of day. In particular, more work is needed to assess the very long-term sustainability of a warmed Mars. “Atmospheric escape to space would take at least 300 million years to deplete the atmosphere at the present-day rate,” he observes. “And nanoparticle warming, by itself, is not sufficient to make the planet’s surface habitable again either.”

Kite and colleagues are now studying the effects of particles of different shapes and compositions, including very small carbon nanoparticles such as graphene nanodisks. They report their present work in Science Advances.

Taking the leap – how to prepare for your future in the quantum workforce https://physicsworld.com/a/taking-the-leap-how-to-prepare-for-your-future-in-the-quantum-workforce/ Fri, 06 Sep 2024 15:16:22 +0000 https://physicsworld.com/?p=116506 Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena, to get their advice on careers in the quantum market

It’s official: after endorsement from 57 countries and the support of international physics societies, the United Nations has officially declared that 2025 is the International Year of Quantum Science and Technology (IYQ).

The year has been chosen as it marks the centenary of Werner Heisenberg laying out the foundations of quantum mechanics – a discovery that would earn him the Nobel Prize for Physics in 1932. As well as marking one of the most significant breakthroughs in modern science, the IYQ also reflects the recent quantum renaissance. Applications that use the quantum properties of matter are transforming the way we obtain, process and transmit information, and physics graduates are uniquely positioned to make their mark on the industry.

It’s certainly big business these days. According to estimates from McKinsey, in 2023 global quantum investments were valued at $42bn. Whether you want to build a quantum computer, an unbreakable encryption algorithm or a high-precision microscope, the sector is full of exciting opportunities. With so much going on, however, it can be hard to make the right choices for your career.

To make the quantum landscape easier to navigate as a jobseeker, Physics World has spoken to Abbie Bray, Araceli Venegas-Gomez and Mark Elo – three experts in the quantum sector, from academia and industry. They give us their exclusive perspectives and advice on the future of the quantum marketplace; job interviews; choosing the right PhD programme; and managing risk and reward in this emerging industry.

Quantum going mainstream: Abbie Bray

According to Abbie Bray, lecturer in quantum technologies at University College London (UCL) in the UK, the second quantum revolution has broadened opportunities for graduates. Until recently, there was only one way to work in the quantum sector – by completing a PhD followed by a job in academia. Now, however, more and more graduates are pursuing research in industry, where established companies such as Google, Microsoft and BT – as well as numerous start-ups like Rigetti and Universal Quantum – are racing to commercialize the technology.

Abbie Bray

While a PhD is generally needed for research, Bray is seeing more jobs for bachelor’s and master’s graduates as quantum goes mainstream. “If you’re an undergrad who’s loving quantum but maybe not loving the research or some of the really high technical skills, there’s other ways to still participate within the quantum sphere,” says Bray. With so many career options in industry, government, consulting or teaching, Bray is keen to encourage physics graduates to consider these as well as a more traditional academic route.

She adds that it’s important to have physicists involved in all parts of the industry. “If you’re having people create policies who maybe haven’t quite understood the principles or impact or the effort and time that goes into research collaboration, then you’re lacking that real understanding of the fundamentals. You can’t have that right now because it’s a complex science, but it’s a complex science that is impacting society.”

So whether you’re a PhD student or an undergraduate, there are pathways into the quantum sector, but how can you make yourself stand out from the crowd? Bray has noticed that quantum physics is not taught in the same way across universities, with some students getting more exposure to the practical applications of the field than others. If you find yourself in an environment that isn’t saturated with quantum technology, don’t panic – but do consider getting additional experience outside your course. Bray highlights PennyLane, a Python library for programming quantum computers whose developers also produce learning resources.
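
To give a flavour of what working with such a library involves, here is a minimal sketch – a generic illustration, not drawn from any particular course – that prepares and measures an entangled two-qubit “Bell” state on PennyLane’s bundled simulator:

import pennylane as qml

# Two-qubit statevector simulator that ships with PennyLane
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)   # put qubit 0 into an equal superposition
    qml.CNOT(wires=[0, 1])  # entangle qubit 1 with qubit 0
    return qml.probs(wires=[0, 1])

print(bell_state())  # ~[0.5, 0, 0, 0.5]: only 00 and 11 are ever observed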

Consider your options

Something else to be aware of, particularly for those contemplating a PhD, is that “quantum technologies” is a broad umbrella term, and while there is some crossover between, say, sensing and computing, switching between disciplines can be a challenge. It’s therefore important to consider all your options before committing to a project and Bray thinks that Centres for Doctoral Training (CDTs) are a step in the right direction. UCL has recently launched a quantum computing and quantum communications CDT where students will undergo a six-month training period before writing their project proposal. She thinks this enables them to get the most out of their research, particularly if they haven’t covered some topics in their undergraduate degree. “It’s very important that during a PhD you do the research that you want to do,” Bray says.

When it comes to securing a job, PhD position or postdoc, non-technical skills can be just as valuable as quantum know-how. Bray says it’s important to demonstrate that you’re passionate and deeply knowledgeable about your favourite quantum topic, but graduates also need to be flexible and able to work in an interdisciplinary team. “If you think you’re a theorist, understand that it also does sometimes mean looking at and working with experimental data and computation. And if you’re an experimentalist, you’ve got to understand that you need to have a rigorous understanding of the theory before you can make any judgements on your experimentation.” As Bray summarises: “theorists and experimentalists need to move at the same time”.

The ability to communicate technical concepts effectively is also vital. You might need to pitch to potential investors, apply for grants or even communicate with the HR department so that they shortlist the best candidates. Bray adds that in her experience, physicists are conditioned to communicate their research very directly, which can be detrimental in interviews where panels want to hear narratives about how certain skills were demonstrated. “They want to know how you identified a situation, then you identified the action, then the resolution. I think that’s something that every single student, every single person right now should focus on developing.”

The quantum industry is still finding its feet and earlier this year it was reported that investment has fallen by 50% since a high in 2022. However, Bray argues that “if there has been a de-investment, there’s still plenty of money to go around” and she thinks that even if some quantum technologies don’t pan out, the sector will continue to provide valuable skills for graduates. “No matter what you do in quantum, there are certain skills and experiences that can cross over into other parts of tech, other parts of science, other parts of business.”

In addition, quantum research is advancing everything from software to materials science and Bray thinks this could kick-start completely new fields of research and technology. “In any race, there are horses that will not cross the finish line, but they might run off and cross some other finish line that we didn’t know existed,” she says.

Building the quantum workforce: Araceli Venegas-Gomez

While working in industry as an aerospace engineer, Araceli Venegas-Gomez was looking for a new challenge and decided to pursue her passion for physics, getting her master’s degree in medical physics alongside her other duties. Upon completing that degree in 2016, she decided to take on a second master’s followed by a PhD in quantum optics and simulation at the University of Strathclyde, UK. By the time the COVID-19 pandemic hit in 2020, she had defended her thesis, registered her company, and joined the University of Bristol Quantum Technology Enterprise Centre as an executive fellow.

Araceli Venegas-Gomez

It was during her studies at Strathclyde that Venegas-Gomez decided to use her vast experience across industry and academia, as well as her quantum knowledge. Thanks to a fellowship from the Optica Foundation, she was able to launch QURECA (Quantum Resources and Careers). Today, it’s a global company that helps to train and recruit individuals, while also providing business development advice for both individuals and companies in the quantum sphere. As founder and chief executive of the firm, her aims were to link the different stakeholders in the quantum ecosystem and to raise the quantum awareness of the general public. Crucially, she also wanted to ease the skills bottleneck in the quantum workforce and to bring newcomers into the quantum ecosystem.

As Venegas-Gomez points out, there is a significant scarcity of skilled quantum professionals for the many roles that need filling. This shortage is exacerbated by the competition between academia and industry for the same pool of talent. “Five or ten years ago, it was difficult enough to find graduate students who would like to pursue a career in quantum science, and that was just in academia,” explains Venegas-Gomez. “With the quantum market booming, industry is also looking to hire from the same pool of candidates, so you have more competition, for pretty much the same number of people.”

Slow progress

Venegas-Gomez highlights that the quantum arena is very broad. “You can have a career in research, or work in industry, but there are so many different quantum technologies that are coming onto the market, at different stages of development. You can work on software or hardware or engineering; you can do communications; you can work on developing the business side; or perhaps even in patent law.” While some of these jobs are highly technical and would require a master’s or a PhD in that specific area of quantum tech, there are plenty of roles that would accept graduates with only an MSc in physics or even a more interdisciplinary experience. “If you have a background in physics and business, everyone is looking for you,” she adds.

From what she sees in the quantum recruitment market today, there is no job shortage for physicists – instead there is a dearth of physicists with the right skills for a specific role. Venegas-Gomez explains that graduates with a physics degree in many fields have transferable skills that allow them to work in “absolutely any sector that you could imagine”. But depending on the specific area of academia or industry within the quantum marketplace that you might be interested in, you will likely require some specific competences.

As Bray also stated, Venegas-Gomez acknowledges that the skills and knowledge that physicists pick up can vary significantly between universities – making it challenging for employers to find the right candidates. To avoid picking the wrong course for you, Venegas-Gomez recommends that potential master’s and PhD students speak to a number of alumni from any given institute to find out more about the course, and see what areas they work in today. This can also be a great networking strategy, especially as some cohorts can have as few as 10–15 students all keen to work with these companies or university departments in the future.

Despite the interest and investment in the quantum industry, new recruits should note that it is still in its early stages. This slow progress can lead to high expectations that are not met, causing frustration for both employers and potential employees. “Only today, we had an employer approach us (QURECA) saying that they wanted someone with three to four years’ experience in Python, and a bachelor’s or master’s degree – it didn’t have to be quantum or even physics specifically,” reveals Venegas-Gomez. “This means that [to get this particular job] you could have a background in computer science or software engineering. Having an MSc in quantum per se is not going to guarantee that you get a job in quantum technologies, unless that is something very specific that employer is looking for.”

So what specific competencies are employers across the board looking for? If a company isn’t looking for a specific technical qualification, what happens if they get two similar CVs for the same role? Do they look at an applicant’s research output and publications, or are they looking for something different? “What I find is that employers are looking for candidates who can show that, alongside their academic achievements, they have been doing outreach and communication activities,” says Venegas-Gomez. “Maybe you took on a business internship and have a good idea of how the industry works beyond university – this is what will really stand out.”

She adds that so-called soft skills – such as good leadership, teamwork and excellent communication – are highly valued. “This is an industry where highly skilled technical people need to be able to work with people vastly beyond their area of expertise. You need to be able to explain Hamiltonians or error corrections to someone who is not quantum-literate and explain the value of what you are working on.”

Venegas-Gomez is also keen that job-seekers realize that the chances of finding a role at a large firm such as Google, IBM or Microsoft are still slim-to-none for most quantum graduates. “I have seen a lot of people complete their master’s in a quantum field and think that they will immediately find the perfect job. The reality is that they likely need to be patient and get some more experience in the field before they get that dream job.” Her main advice to students is to clearly define their career goals, within the context of the booming and ever-growing quantum market, before pursuing a specific degree. The skills you acquire with a quantum degree are also highly transferable to other fields, meaning there are lots of alternatives out there even if you can’t find the right job in the quantum sphere. For example, experience in data science or software development can complement quantum expertise, making you a versatile and coveted contender in today’s job market.

Approaching “quantum advantage”: Mark Elo

Last year, IBM broke records by building the first quantum chip with more than 1000 qubits. The project represents millions of dollars of investment and the company is competing with the likes of Intel and Google to achieve “quantum advantage”, which refers to a quantum computer that can solve problems that are out of reach for classical machines.

Despite the hype, there is work to be done before the technology becomes widespread – a commercial quantum computer needs millions of qubits, and challenges in error correction and algorithm efficiency must be addressed.

Mark Elo

“We’re trying to move it away from a science experiment to something that’s more an industrial product,” says Mark Elo, chief marketing officer at Tabor Electronics. Tabor has been building electronic signal equipment for over 50 years and recently started applying this technology to quantum computing. The company’s focus is on control systems – classical electronic signals that interact with quantum states. At the 2024 APS March Meeting, Tabor, alongside its partners FormFactor and QuantWare, unveiled the first stage of the Echo-5Q project, a five-qubit quantum computer.

Elo describes the five years he’s worked on quantum computing as a period of significant change. Whereas researchers once relied on “disparate pieces of equipment” to build experiments, he says that the industry has changed such that “there are [now] products designed specifically for quantum computing”.

The ultimate goal of companies like Tabor is a “full-stack” solution where software and hardware are integrated into a single platform. However, the practicalities of commercializing quantum computing require a workforce with the right skills. Two years ago the consultancy McKinsey reported that companies were already struggling to recruit, and it predicted that by 2025 half of the jobs in quantum computing would not be filled. Like many in the industry, Elo sees skills gaps in the sector that must be addressed to realize the potential of quantum technology.

Elo’s background is in solid-state electronics, and he worked for nearly three decades on radio-frequency engineering for companies including HP and Keithley. Most quantum-computing control systems use radio waves to interface with the qubits, so when he moved to Tabor in 2019, Elo saw his career come “full circle”, combining the knowledge from his degree with his industry experience. “It’s been like a fusion of two technologies,” he says.

It’s at this interface between physics and electronic engineering where Elo sees a skills shortage developing. “You need some level of electrical engineering and radio-frequency knowledge to lay out a quantum chip,” he explains. “The most common qubit is a transmon, and that is all driven by radio waves. Deep knowledge of how radio waves propagate through cables, through connectors, through the sub-assemblies and the amplifiers in the refrigeration unit is very important.” Elo encourages physics students interested in quantum computing to consider adding engineering – specifically radio-frequency electronics – courses to their curricula.

Transferable skills

The Tabor team brings together engineers and physicists, but there are some universal skills it looks for when recruiting. People skills, for example, are a must. “There are some geniuses in the world, but if they can’t communicate it’s no good in an industrial environment,” says Elo.

Elo describes his work as “super exciting” and says “I feel lucky in the career and the technology I’ve been involved in because I got to ride the wave of the cellular revolution all the way up to 5G and now I’m on to the next new technology.” However, because quantum is an emerging field, he thinks that graduates need to be comfortable with some risk before embarking on a career. He explains that companies don’t always make money right now in the quantum sector – “you spend a lot to make a very small amount”. But, as Elo’s own career shows, the right technical skills will always allow you to switch industries if needed.

Like many others, Elo is motivated by the excitement of competing to commercialize this new technology. “It’s still a market that’s full of ideas and people marketing their ideas to raise money,” he says. “The real measure of success is to be able to look at when those ideas become profitable. And that’s when we know we’ve crossed a threshold.”

The post Taking the leap – how to prepare for your future in the quantum workforce appeared first on Physics World.

]]>
Feature Katherine Skipper and Tushna Commissariat interview three experts in the quantum arena, to get their advice on careers in the quantum market https://physicsworld.com/wp-content/uploads/2024/09/2024-09-GRADCAREERS-computing-abstract-1190168517-iStock_blackdovfx.jpg newsletter1
BepiColombo takes its best images yet of Mercury’s peppered landscape https://physicsworld.com/a/bepicolombo-takes-its-best-images-yet-of-mercurys-peppered-landscape/ Fri, 06 Sep 2024 10:18:45 +0000 https://physicsworld.com/?p=116616 The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
The BepiColombo mission to Mercury – Europe’s first craft to the planet – has successfully completed its fourth gravity-assist flyby, using the planet’s gravity to put it on course to enter orbit around Mercury in November 2026. As it did so, the craft captured its best images yet of some of Mercury’s largest impact craters.

BepiColombo, which launched in 2018, comprises two science orbiters that will circle Mercury – the European Space Agency’s Mercury Planetary Orbiter (MPO) and the Japan Aerospace Exploration Agency’s Mercury Magnetospheric Orbiter (MMO).

The two spacecraft are travelling to Mercury as part of a coupled system. When they reach the planet, the MMO will study Mercury’s magnetosphere while the MPO will survey the planet’s surface and internal composition.

The aim of the BepiColombo mission is to provide information on the composition, geophysics, atmosphere, magnetosphere and history of Mercury.

The closest approach so far for the mission – about 165 km above the planet’s surface – took place on 4 September. For the first time, the spacecraft had a clear view of Mercury’s south pole.

Mercury by BepiColombo

One image (top), taken by the craft’s M-CAM2 camera, features a large “peak ring basin” inside a crater measuring 210 km across, which is named after the famous Italian composer Antonio Vivaldi. The visible gap in the peak ring is thought to be where more recent lava flows have entered and flooded the crater.

BepiColombo will now conduct its fifth and sixth flybys of the planet on 1 December 2024 and 8 January 2025, respectively, before arriving in November 2026. The mission is planned to operate until 2029.

The post BepiColombo takes its best images yet of Mercury’s peppered landscape appeared first on Physics World.

]]>
Blog The spacecraft had a clear view of Mercury’s south pole for the first time during a recent flyby https://physicsworld.com/wp-content/uploads/2024/09/Mercury_reveals_its_Four_Seasons-small.jpg
Hybrid quantum–classical computing chips and neutral-atom qubits both show promise https://physicsworld.com/a/hybrid-quantum-classical-computing-chips-and-neutral-atom-qubits-both-show-promise/ Thu, 05 Sep 2024 15:33:03 +0000 https://physicsworld.com/?p=116604 Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast looks at quantum computing from two different perspectives.

Our first guest is Elena Blokhina, who is chief scientific officer at Equal1 – an award-winning company that is developing hybrid quantum–classical computing chips. She explains why Equal1 is using quantum dots as qubits in its silicon-based quantum processor unit.

Next up is Brandon Grinkemeyer, who is a PhD student at Harvard University working in several cutting-edge areas of quantum research. He is a member of Misha Lukin’s research group, which is active in the fields of quantum optics and atomic physics and is at the forefront of developing quantum processors that use arrays of trapped atoms as qubits.

The post Hybrid quantum–classical computing chips and neutral-atom qubits both show promise appeared first on Physics World.

]]>
Podcasts Equal1’s Elena Blokhina and Harvard’s Brandon Grinkemeyer are our guests https://physicsworld.com/wp-content/uploads/2024/09/Brandon-and-Elena.jpg
Researchers with a large network of unique collaborators have longer careers, finds study https://physicsworld.com/a/researchers-with-a-large-network-of-unique-collaborators-have-longer-careers-finds-study/ Thu, 05 Sep 2024 15:00:02 +0000 https://physicsworld.com/?p=116555 Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
Are you keen to advance your scientific career? If so, it helps to have a big network of colleagues and a broad range of unique collaborators, according to a new analysis of physicists’ publication data. The study also finds that female scientists tend to work in more tightly connected groups than men, which can hamper their career progression.

The study was carried out by a team led by Mingrong She, a data analyst at Maastricht University in the Netherlands. It examined the article history of more than 23,000 researchers who had published at least three papers in American Physical Society (APS) journals. Each scientist’s last paper had been published before 2015, suggesting their research career had ended (arXiv:2408.02482).

To measure “collaboration behaviour”, the study noted the size of each scientist’s collaborative network, the recurrence of collaborations, the “interconnectivity” of the co-authors and the average number of co-authors per publication. Physicists with larger networks and a greater number of unique collaborators were found to have had longer careers and been more likely to become principal investigators, as given by their position in the author list.

On the other hand, publishing repeatedly with the same highly interconnected co-authors is associated with shorter careers and a lower chance of achieving principal investigator status, as is having a larger average number of co-authors.
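
To make those four metrics concrete, here is a small sketch – an illustration only, using invented author lists rather than the study’s actual pipeline – of how they can be computed from co-authorship records with Python’s networkx library:

import networkx as nx
from itertools import combinations

# Hypothetical publication records: each entry is one paper's author list
papers = [["A", "B", "C"], ["A", "B"], ["A", "D", "E"]]

G = nx.Graph()
for authors in papers:
    for u, v in combinations(authors, 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1  # edge weight counts repeat collaborations
        else:
            G.add_edge(u, v, weight=1)

focal = "A"
network_size = G.degree(focal)               # number of unique collaborators
repeats = sum(d["weight"] - 1 for _, _, d in G.edges(focal, data=True))
interconnectivity = nx.clustering(G, focal)  # how linked the co-authors are
coauthor_counts = [len(p) - 1 for p in papers if focal in p]
mean_coauthors = sum(coauthor_counts) / len(coauthor_counts)

print(network_size, repeats, interconnectivity, mean_coauthors)
# 4 unique collaborators, 1 repeat, clustering ~0.17, ~1.7 co-authors per paper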

The team also found that the more that physicists publish with the same co-authors, the more interconnected their networks become. Conversely, as network size increases, networks tend to be less dense and repeat collaborations less frequent.

Close-knit collaboration

In terms of gender, the study finds that women have more interconnected networks and a higher average number of co-authors than men. Female physicists are also more likely to publish repeatedly with the same co-authors, with women therefore being less likely than men to become principal investigators. Male scientists also have longer overall careers and stay in science longer after achieving principal investigator status than women, the study finds.

Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities

Mingrong She

“Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities [and] increases the probability of establishing connections with prominent researchers and institutions,” She told Physics World. Diverse collaboration also “mitigates the risk of being confined to a narrow niche and enhances adaptability,” she adds, “both of which are indispensable for long-term career growth.”

Close-knit collaboration networks can be good for fostering professional support, the study authors state, but they reduce opportunities for female researchers to form new professional connections and lower their visibility within the broader scientific community. Similarly, larger numbers of co-authors dilute individual contributions, making it harder for female researchers to stand out.

She says the study “highlights how the structure of collaboration networks can reinforce existing inequalities, potentially limiting opportunities for women to achieve career longevity and progression”. Such issues could be improved with policies that help scientists to engage a wider array of collaborators, rewarding and encouraging small-team publications and diverse collaboration. Policies could include adjustments to performance evaluations and grant applications, and targeted training programmes.

The study also highlights lower mobility as a major obstacle for female scientists, suggesting that better childcare support, hybrid working and financial incentives could help improve the mobility and network size of female scientists.

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

]]>
News Female scientists tend to work in more tightly connected groups than men, which can negatively impact their careers https://physicsworld.com/wp-content/uploads/2024/09/social-network-505782242-iStock_Ani_Ka.jpg
Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner https://physicsworld.com/a/shrinivas-kulkarni-curiosity-and-new-technologies-inspire-shaw-prize-in-astronomy-winner/ Thu, 05 Sep 2024 10:58:03 +0000 https://physicsworld.com/?p=116565 "No shortage of phenomena to explore," says expert on variable and transient objects

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
What does Shrinivas Kulkarni find fascinating? When I asked him that question I expected an answer related to his long and distinguished career in astronomy. Instead, he talked about how the skin of sharks has a rough texture, which seems to reduce drag – allowing the fish to swim faster. He points out that you might not win a Nobel prize for explaining the hydrodynamics of shark skin, but it is exactly the type of scientific problem that captivates Kulkarni’s inquiring mind.

But don’t think that Kulkarni – who is George Ellery Hale Professor of Astronomy and Planetary Sciences at the California Institute of Technology (Caltech) – is whimsical when it comes to his research interests. He says that he is an opportunist, especially when it comes to technology, which he says makes some research questions more answerable than others. Indeed, the scientific questions he asks are usually guided by his ability to build technology that can provide the answers.

Kulkarni won the 2024 Shaw Prize in Astronomy for his work on variable and transient astronomical objects. He says that the rapid development of new and powerful technologies has meant that the last few decades have been a great time to study such objects. “Thirty years ago, the technology was just not there,” he recalls. “Optical sensors were too expensive and the necessary computing power was not available.”

Transient and variable objects

Kulkarni told me that there are three basic categories of transient and variable objects. One category covers objects that change position in the sky – with examples including planets and asteroids. A second category includes objects that oscillate in terms of their brightness.

“About 10% of stars in the sky do not shine steadily like the Sun,” he explains. “We are lucky that the Sun is an extremely steady star. If its output varied by just 1% it would have a huge impact on Earth – much larger than the current global warming. But many stars do vary at the 1% level for a variety of reasons.” These can be rotating stars with large sunspots or stars eclipsing in binary systems, he explains.

It might surprise you that every second, somewhere in the universe, there is a supernova

The third and most spectacular category involves stars that undergo rapid and violent changes, such as those that explode as supernovae. “It might surprise you that every second, somewhere in the universe, there is a supernova. Some are very faint, so we don’t see all of them, but with the Zwicky Transient Facility (ZTF) we see about 20,000 supernovae per year.” Kulkarni is principal investigator for the ZTF, and his leadership at that facility is mentioned in his Shaw Prize citation.

Kulkarni explains that astronomers are interested in transient and variable objects for many different reasons. Closer to home, scientists monitor the skies for asteroids that may be on collision courses with Earth.

“In 1908 there was a massive blast in Siberia called the Tunguska event,” he says. This is believed to be the result of the air explosion of a rocky meteor that was about 55 m in diameter. Because it happened in a remote part of the world, only three people are known to have been killed. Kulkarni points out that if such a meteor struck a populated area like Southern California, it would be catastrophic. By studying and cataloguing asteroids that could potentially strike Earth, Kulkarni believes that we could someday launch space missions that nudge away objects on collision courses with Earth.

Zwicky Transient Facility

At the other end of the mass and energy range, Kulkarni says that studying spectacular events such as supernovae provides important insights into the origins of many of the elements that make up the Earth and indeed ourselves. He says that over the past 70 years astronomers have made “amazing progress” in understanding how different elements are created in these explosions.

Kulkarni was born in 1956 in Kurundwad, which is in the Indian state of Maharashtra. In 1978, he graduated with an MS degree in physics from the Indian Institute of Technology in New Delhi. His next stop was the University of California, Berkeley, where he completed a PhD in astronomy in 1983. He joined Caltech in 1985 and has been there ever since.

You could say that I live on adrenaline and I want to produce something very fast, making significant progress in a short time

A remarkable aspect of Kulkarni’s career is his ability to switch fields every 5–10 years, something that he puts down to his curious nature. “After I understand something to a reasonable level, I lose interest because the curiosity is gone,” he says. Kulkarni adds that his choice of a new project is guided by his sense of whether rapid progress can be made in the field. “You could say that I live on adrenaline and I want to produce something very fast, making significant progress in a short time”.

He gives the example of his work on gamma-ray bursts, which are some of the most powerful explosions in the universe. He says that this was a very fruitful field when astronomers were discovering about one burst per month. But then the Neil Gehrels Swift Observatory was launched in 2004 and it was able to detect 100 or so gamma-ray bursts per year.

Looking for new projects

At this point, Kulkarni says that studying bursts became a “little industry” and that’s why he left the field. “All the low-hanging fruit had been picked – and when the fruit is higher on the tree, that is when I start looking for new projects”.

It is this restlessness that first got him involved in the planning and operation of two important instruments, the Palomar Transient Factory (PTF) and its successor the Zwicky Transient Facility (ZTF). These are wide-field astronomical sky surveys that look for rapid changes in the brightness or position of astronomical objects. The PTF began observing in 2009 and the ZTF took over in 2018.

Kulkarni says that he is fascinated by the engineering aspects of astronomy and points out that technological advances in sensors, electronics, computing and automation continue to transform how observational astronomy is done. He explains that all of these technological factors came together in the design and operation of the PTF and the ZTF.

His involvement with PTF and ZTF allowed Kulkarni to make many exciting discoveries during his career. However, his favourite variable object is one that he discovered in 1982 while doing a PhD under Donald Backer. Called PSR B1937+21, it is the first millisecond pulsar ever to be observed. It is a neutron star that rotates more than 600 times per second while broadcasting a beam of radio waves much like a lighthouse.

“I was there [at the Arecibo Observatory] all alone… it was very thrilling,” he says. The discovery provided insights into the density of neutron stars and revitalized the study of pulsars, leading to large-scale surveys that target pulsars.

When you find a new class of objects, there’s a certain thrill knowing that you and your students are the only people in the world to have seen something

Another important moment for Kulkarni occurred in 1994, when he and his graduate students were the first to observe a cool brown dwarf. These are objects whose masses lie between those of gas-giant planets (like Jupiter) and small main-sequence stars. “When you find a new class of objects, there’s a certain thrill knowing that you and your students are the only people in the world to have seen something. That was kind of fun.”

Kulkarni is proud of his early achievements, but don’t think that he dwells on the past. “This is a fantastic time to do astronomy. The instruments that we’re building today have an enormous capacity for information delivery.”

First brown dwarf

He mentions images released by the European Space Agency’s Euclid space telescope, which launched last year. He describes them as “gorgeous pictures” but points out that the real wonder is that he could zoom in on the images by a factor of 10 before the pixels became apparent. “It was just so rich, a single image is maybe a square degree of the sky. The resolution is just amazing.”

And when it comes to technology, Kulkarni is adamant that it’s not only bigger and more expensive telescopes that are pushing the frontiers of astronomy. “There is more room sideways,” he says, meaning that much progress can be made by repurposing existing facilities.

Indeed, ZTF uses – and PTF before it used – the Samuel Oschin telescope at the Palomar Observatory in California. This is a 48-inch (1.2 metre) facility that saw first light 75 years ago. With new instruments, old telescopes can be used to study the sky “ferociously”, he says.

Kulkarni told me that even he was surprised at the number of papers that ZTF data have spawned since the facility came online in 2018. One important reason, says Kulkarni, is that ZTF immediately shares its data freely with astronomers around the world. Indeed, it is the explosion in data from facilities like the ZTF, along with rapid improvements in data processing, that Kulkarni believes has put us in a golden age of astronomy.

Beyond the technology, Kulkarni says that the very nature of the cosmos means that there will always be opportunities for astronomers. He muses that the universe has been around for nearly 14 billion years and has had “many opportunities to do some very strange things – and a very long time to cook up those things – so there’s no shortage of phenomena to explore”.

Great time to be an astronomer

So it is a great time to consider a career in astronomy, and Kulkarni’s advice to aspiring astronomers is to be pragmatic about how they approach the field. “Figure out who you are and not who you want to be,” he says. “If you want to be an astronomer, there are roughly three categories open to you. You can be a theorist who puts in a lot of time understanding the physics, and especially the mathematics, that are used to make sense of astronomical observations.”

At the other end of the spectrum are the astronomers who build the “gizmos” that are used to scan the heavens – generating the data that the community rely on. The third category, says Kulkarni, falls somewhere between these two extremes and includes the modellers. These are the people who take the equations developed by the theorists and create computer models that help us understand observational data.

“Astronomy is a fantastic field and things are really happening in a very big way.” He asks new astronomers to “bring a fresh perspective, bring energy, and work hard”. He also says that success comes to those who are willing to reflect on their strengths and weaknesses. “Life is a process of continual improvement, continual education, and continual curiosity.”

The post Shrinivas Kulkarni: curiosity and new technologies inspire Shaw Prize in Astronomy winner appeared first on Physics World.

]]>
Analysis "No shortage of phenomena to explore," says expert on variable and transient objects https://physicsworld.com/wp-content/uploads/2024/09/5-9-24-Kulkarni_Shri-Faculty.jpg newsletter
Twisted fibres capture more water from fog https://physicsworld.com/a/twisted-fibres-capture-more-water-from-fog/ Wed, 04 Sep 2024 14:00:41 +0000 https://physicsworld.com/?p=116579 New finding could allow more fresh water to be harvested from the air

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Twisted fibres are more efficient at capturing and transporting water from foggy air than straight ones. This finding, from researchers at the University of Oslo, Norway, could make it possible to develop advanced fog nets for harvesting fresh water from the air.

In many parts of the world, fresh water is in limited supply and not readily accessible. Even in the driest deserts, however, the air still contains some humidity, and with the right materials, it is possible to retrieve it. The simplest way of doing this is to use a net to catch water droplets that condense on the material for later release. The most common types of net for this purpose are made from steel extruded into wires; plastic fibres and strips; or woven poly-yarn. All of these have uniform cross-sections and are therefore relatively smooth and straight.

Nature, however, abounds with slender, grooved and bumpy structures that plants and animals have evolved to capture water from ambient air and quickly transport droplets where they need to go. Cactus spines, nepenthes plants, spider spindle silk and Namib desert beetle shells are just a few examples.

From “barrel” to “clamshell”

Inspired by these natural structures, Vanessa Kern and Andreas Carlson of the mechanics section in Oslo’s Department of Mathematics placed water droplets on two vertical threads that they had mechanically twisted together. They then recorded the droplets’ flow paths using high-speed imaging.

By changing the tightness, or wavelength, of the twist, the researchers were able to control when the droplet changed from its originally symmetric “barrel” shape to an asymmetric “clamshell” configuration. This allowed the researchers to speed up or slow down the droplets’ flow. While this is not the first time that scientists have succeeded in changing the shapes of droplets sliding down fibres, most previous work focused on perfectly wetting liquids, rather than partially wetting ones as was the case here.

Once they understood the droplets’ dynamics, Kern and Carlson designed nets that could be pre-programmed with anti-clogging properties. They then analysed the twisted fibres’ ability to collect water from fog flowing through an experimental wind tunnel, plotting the fibres’ water yield as a function of how much they were twisted.

Grooves that work as a water slide

The Oslo team found that the greater the number of twists, the more water the fibres captured. Notably, the increase was greater than would be expected from an increase in surface area alone. The team say this implies that the geometry of the twists is more important than area in increasing fog capture.
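
A quick geometric estimate shows why extra surface area alone falls short. The sketch below – with made-up dimensions, not the paper’s analysis – computes how much longer each fibre’s helical centreline becomes per twist period compared with a straight fibre:

import math

def area_increase_factor(twist_wavelength_mm, helix_radius_mm):
    """Helix arc length per twist period, divided by the period itself."""
    circumference = 2 * math.pi * helix_radius_mm
    return math.hypot(twist_wavelength_mm, circumference) / twist_wavelength_mm

# e.g. 0.5 mm fibres twisted around each other with a 5 mm twist wavelength
print(area_increase_factor(5.0, 0.5))  # ~1.18, i.e. only ~18% more area

Even a fairly tight twist therefore adds only tens of per cent to the wetted area, so a yield increase beyond that – as the team measured – points to the groove geometry rather than the extra area.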

“Introducing a twist allowed us to effectively form grooves that work as a water slide as it stabilises a liquid film,” Kern explains. “This alleviates the well-known problem of straight fibres, where droplets would get stuck/pinned.”

The twisted fibres would make good fog nets, adds Carlson. “Fog nets are typically made up of plastic fibres and used to harvest fresh water from fog in arid regions such as in Morocco. Our results indicate that these twisted fibres could indeed be beneficial in terms of increasing the efficiency of such nets compared to straight fibres.”

The researchers are now working on testing their twisted fibres in a wider range of wind and fog conditions. They hope these tests will show which environments the fibres work best in, and where they might be most suitable for water harvesting. “We also want to move towards conditions closer to those found in the field,” they say. “There are still many open questions about the small-scale physics of the flow inside the grooves between these fibres that we want to answer too.”

The study is detailed in PNAS.

The post Twisted fibres capture more water from fog appeared first on Physics World.

]]>
Research update New finding could allow more fresh water to be harvested from the air https://physicsworld.com/wp-content/uploads/2024/09/24-02252-1.jpg
Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us https://physicsworld.com/a/robot-cooked-pizza-delivered-to-your-door-heres-what-zumes-failure-tells-us/ Wed, 04 Sep 2024 10:00:33 +0000 https://physicsworld.com/?p=116354 James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
A red truck and small car behind it, from the Zume Pizza company

“The $500 million robot pizza start-up you never heard of has shut down, report says.”

Click-bait headlines don’t always tell the full story and this one is no exception. It appeared last year on the Business Insider website and concerned Zume – a Silicon Valley start-up backed by Japanese firm SoftBank, which once bought chip-licensing firm Arm Holdings. Zume proved to be one of the biggest start-up failures in 2023, burning through nearly half a billion dollars of investment (yes, half a billion dollars) before closing down.

Zume was designed to deliver pizzas to customers in vans, with the food prepared by robots and cooked in GPS-equipped automated ovens. The company was founded in 2015 as Zume Pizza, delivering its first pizzas the year after. But according to Business Insider, which retold a story from The Information, Zume struggled with problems like “stopping melted cheese from sliding off its pizzas while they cooked in moving trucks”.

It’s easy to laugh, but the headline from Business Insider belittled the start-up founders and their story

It’s easy to laugh, but the headline from Business Insider belittled the start-up founders and their story. Unless you’ve set up your own firm, you probably won’t understand the passion, dedication and faith needed to found or join a start-up team. Still, from a journalistic point of view, the headline did the trick in that it encouraged me to delve further into the story. Here’s what I think we can learn from the episode.

A new spin on pizza

On the face of it, Zume is a brilliant and compelling idea. You’re taking the two largest costs of the pizza-delivery business – chefs to cook the food and premises to house the kitchen – and removing them from the equation. Instead, you’re replacing them with delivery vehicles that make the food automatically en-route, potentially even using autonomous vehicles that don’t need human drivers either. What could possibly go wrong?

Zume, which quickly raised $6m in Series A investment funding in 2016, delivered its first pizzas in September of that year. The company secured a patent on cooking during delivery, which included algorithms to predict customer choices. It also planned to work with other firms to provide further robot-prepared food, such as salads and desserts.

By 2018 the concept had captured the imagination of major investors such as SoftBank, which saw the potential for Zume to become the “Amazon of pizza”. The scope was huge: the overall global pizza market was worth $197bn in 2021 and is set to grow to $551bn by 2031, according to market research firm Business Research Insights. So it should be possible to grab a piece of the pie with enough funding and focused, disruptive innovation.

But with customers complaining about the robotic pizzas, the company said in 2018 it was moving in new directions. Instead, it now planned to use artificial intelligence (AI) and its automated production technology for automated food trucks and would form a larger umbrella company – Zume, Inc. It also planned to start licensing its automation technology.

In November 2018 the company raised $375m from SoftBank, valuing it at an eye-popping $2.25bn. It then started focusing on automated production and packaging for other food firms, including buying Pivot – a company that made sustainable, plant-based packaging. By 2020 it was concentrating fully on compostable food packaging and then laid off more than 500 staff, including its entire robotics and delivery truck teams.

Sadly, Zume, Inc was unable to sustain enough sales or bring in enough external funding. Investment cash started running precariously low and in June 2023 the firm was eventually shut down, leaving “joke” headlines about cheese sliding off. How very sad for all involved, but this was only a small part of the issues the company faced.

Inside Zume

Many have speculated where it all went wrong for Zume. To me, the problem seemed to be execution and understanding of the market. The food industry is dominated by established brands with big advertising budgets and huge promotions. When faced with these kinds of challenges, any new business must work out how to compete with, break into and disrupt this kind of established market.

Once I looked into what happened at Zume, it wasn’t quite as amazing as I initially thought. To my mind, the logical thing would have been to have all the operations on the truck. But according to a video released by the company in 2016, that’s not what they did. Instead, Zume built an overly complex robot production line in a larger space than a traditional pizza outlet to make the pizzas.

The food was then loaded onto trucks and cooked en route in a van equipped with 56 automated ovens. Each was timed so that the pizza would be ready shortly before it arrived at the customer’s address. Zume had an app and aimed to cut the time from order to delivery to 22 minutes – which was pretty good. But the app in itself wasn’t a big innovation; after all, Domino’s first had one as far back as 2010.

In American start-up culture, failure is not an embarrassment. It’s seen as a learning experience, and looking at the mistakes of history can yield some valuable insights. But then I stumbled upon a really great article by a firm called Legendary Supply Chain that spelled out clearly what happened. Turns out, what really went wrong was Zume’s lack of understanding of the drivers and economics of the pizza-delivery business.

The 3Ps of pizza

Pizzas have a tiny profit margin. But Zume created massive capital costs by developing automation systems, which meant they’d have to sell loads of pizza to make enough return on investment. Worse still, using FedEx-sized trucks to deliver individual pizzas was inherently wasteful and impractical. That’s why you’ll usually see most pizza delivery drivers on bicycles, mopeds or cars, which are a far more cost-effective means of delivery.

You could say that Zume re-invented the wheel by re-creating – at great cost – the automation you find in frozen-pizza factories and applying it to a much smaller scale operation. It also seems that the firm didn’t focus enough on the product or what the customers wanted – and instead seemed to solve problems that didn’t exist. In short, the execution was poor and the $400m raised rather went to managers’ heads.

Countless successful companies prove what’s vital are the “3Ps”: product, price and promotion. People buy pizza on an impulse. For me, whenever the idea of a pizza pops into my head, I want something that’s yummy, saves me from cooking and perhaps reminds me of Italian holidays. According to customer feedback, Zume’s pizza was only “okay”. Apart from the cheese occasionally sliding off, it wasn’t any better or worse than anything else.

As far as price was concerned, Zume’s pizzas ought to have been cheaper to make and deliver than rival firms. However, Zume charged a premium price on account of the food being slightly fresher as it was cooked while being delivered. Customers, unfortunately, didn’t buy into this argument sufficiently. I’m not sure what Zume did to promote their products, but with all that money sloshing around, they certainly had more than enough to create a brand.

Zume’s failure won’t be the last attempt to disrupt or break into the pizza-delivery market – and learning from past mistakes could well help

I’m sure Zume’s failure won’t be the last attempt to disrupt or break into the pizza-delivery market – and learning from past mistakes could well help. In fact, I can see why putting sufficiently low-cost automation on a fleet of small vans – coupled with low-cost, central supply depots – might make the economics more favourable. But anyone wanting to revolutionize pizza delivery will have to map out the costs and economics of pizza delivery to get funded and have some good answers to where Zume went wrong.

The odds for start-up success are not good. As I’ve mentioned before, almost 90% of start-ups in the UK survive their first year, but fewer than half make it beyond five years. To get there – whether you’re making pizzas or photodetectors – you’ll need a good plan, a great team, a degree of luck and good timing to compete in the market. But if you do succeed, the rewards are clear.

The post Robot-cooked pizza delivered to your door? Here’s what Zume’s failure tells us appeared first on Physics World.

]]>
Opinion and reviews James McKenzie looks at the reasons behind the failure of the Zume robotic pizza-delivery business https://physicsworld.com/wp-content/uploads/2024/08/2024-09-Transactions-Pizzaslice_feature.jpg newsletter
Quark distribution in light–heavy mesons is mapped using innovative calculations https://physicsworld.com/a/quark-distribution-in-light-heavy-mesons-is-mapped-using-innovative-calculations/ Wed, 04 Sep 2024 07:27:27 +0000 https://physicsworld.com/?p=116554 Form factors can be tested by collider experiments

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.

]]>
The distribution of quarks inside flavour-asymmetric mesons has been mapped by Yin-Zhen Xu of the University of Huelva and Pablo de Olavide University in Spain. These mesons are strongly interacting particles composed of a quark and an antiquark, one heavy and one light.

Xu employed the Dyson–Schwinger/Bethe–Salpeter equation technique to calculate the heavy–light meson electromagnetic form factors, which can be measured in collider experiments. These form factors provide invaluable information about the properties of the strong interactions as described by quantum chromodynamics.

“The electromagnetic form factors, which describe the response of composite particles to electromagnetic probes, provide an important tool for understanding the structure of bound states in quantum chromodynamics,” explains Xu. “In particular, they can be directly related to the charge distribution inside hadrons.”
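
For orientation, the textbook form of that connection – quoted here as background; the fully relativistic treatment used in such calculations is more involved – is that in the non-relativistic limit the form factor is the Fourier transform of the charge density, and its slope at zero momentum transfer gives the mean-square charge radius:

F(Q^2) = \int \mathrm{d}^3 r\, e^{i\mathbf{q}\cdot\mathbf{r}}\, \rho(\mathbf{r}), \qquad \langle r^2 \rangle = -6\, \left.\frac{\mathrm{d}F}{\mathrm{d}Q^2}\right|_{Q^2=0}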

From numerous experiments, we know that particles that interact via the strong force (such as protons and neutrons) consist of quarks bound together by gluons. This is similar to how nuclei and electrons are bound into atoms through the exchange of photons, as described by quantum electrodynamics. However, doing precise calculations in quantum chromodynamics is nearly impossible, and this makes predicting the internal structure of hadrons extremely challenging.

Approximation techniques

To address this challenge, scientists have developed several approximation techniques. One such method is the lattice approach, which replaces the infinite number of points in real space with a finite grid, making calculations more manageable. Another effective method involves solving the Dyson–Schwinger/Bethe–Salpeter equations. These equations neglect certain subtle effects in the strong interactions of quarks with gluons, as well as the virtual quark–antiquark pairs that are constantly appearing and disappearing in the vacuum.

Xu’s new study, described in the Journal of High Energy Physics, utilized the Dyson–Schwinger/Bethe–Salpeter approach to investigate the properties of hadrons made of quarks and antiquarks of different types (or flavours) with significant mass differences. For instance, K-mesons are composed of a strange antiquark with a mass of around 100 MeV and an up or down quark with a mass of only a few megaelectronvolts. The substantial difference in quark masses simplifies their interaction, which allowed Xu to extract more information about the structure of flavour-asymmetric mesons.

Xu began his study by calculating the masses of mesons and comparing these results with experimental data. He found that the Dyson–Schwinger/Bethe–Salpeter method produced results comparable to the best previously used methods, validating his approach.

Deducing quark distributions

Xu’s next step was to deduce the distribution of quarks within the mesons. Quantum effects prevent particles from being localized in space, so he calculated the probability of their presence in certain regions, whose size depends on the properties of the quarks and their interactions with surrounding particles.

Xu discovered that the heavier the quark, the more localized it is within the meson, with distribution ranges differing by more than a factor of ten. For instance, in B-mesons, the distribution range of the heavy bottom antiquark (0.07 fm) is much smaller than that of the much lighter up or down quark (0.80 fm). In contrast, the distribution ranges of the two light quarks inside π-mesons are almost equal.

Using these quark distributions, Xu then computed the electromagnetic form factors, which encode the details of charge and current distribution within the mesons. The values he obtained closely matched the available experimental data.

In his work, Xu has shown that the Dyson–Schwinger/Bethe–Salpeter technique is particularly well-suited for studying heavy–light mesons, often surpassing even the most sophisticated and resource-intensive methods used previously.

Room for refinement

Although Xu’s results are promising, he admits that there is room for refinement. On the experimental side, measuring some currently unknown form factors could allow comparisons with his computed values to further verify the method’s consistency.

From a theoretical perspective, more details about strong interactions within mesons could be incorporated into the Dyson–Schwinger/Bethe–Salpeter method to enhance computational accuracy. Additionally, other meson parameters can be computed using this approach, allowing more extensive comparisons with experimental data.

“Based on the theoretical framework applied in this work, other properties of heavy–light mesons, such as various decay rates, can be further investigated,” concludes Xu.

The study also provides a powerful tool for exploring the intricate world of strongly interacting subatomic particles, potentially opening new avenues in particle physics research.

The calculations are described in the Journal of High Energy Physics.

Estonia becomes first Baltic state to join CERN https://physicsworld.com/a/estonia-becomes-first-baltic-state-to-join-cern/ Tue, 03 Sep 2024 12:45:08 +0000 https://physicsworld.com/?p=116548 The Baltic nation is now the 24th member state of the Geneva-based particle-physics lab

Estonia is the first Baltic state to become a full member of the CERN particle-physics lab near Geneva. The country, which has a population of 1.3 million, formally became the 24th CERN member state on 30 August. Estonia is now expected to pay around €1.5m each year in membership fees.

CERN, which celebrates its 70th anniversary this year, is funded by its member countries – including France, Germany and the UK – which pay towards the lab’s programmes and sit on its governing council. Full membership also allows a country’s nationals to become CERN staff and its firms to bid for CERN contracts. The lab also has 10 “associate member” states, as well as four countries or organizations with “observer” status, such as the US.

Accelerating collaborations

A first cooperation agreement between Estonia and CERN was signed in 1996, which was followed by a second agreement in 2010 with the country paying about €300,000 each year to the lab. Estonia formally applied for CERN membership in 2018 and on 1 February 2021 the country became an associate member state “in the pre-stage” to fully joining CERN.

Physicists in Estonia are already part of the CMS collaboration at the lab’s Large Hadron Collider (LHC) and they participate in data analysis and the Worldwide LHC Computing Grid (WLCG), in which a “tier 2” centre is located in Tallinn. Scientists from Estonia also contribute to other CERN experiments including CLOUD, COMPASS, NA66 and TOTEM, as well as work on future collider designs.

Estonia’s president, Alar Karis, who trained as a bioscientist, says he is “delighted” with the country’s full membership. “CERN accelerates more than tiny particles, it also accelerates international scientific collaboration and our economies,” Karis adds. “We have seen this potential during our time as associate member state and we are keen to begin our full contribution.”

CERN director general Fabiola Gianotti says she is “very pleased to welcome Estonia” as a full member. “I am sure the country and its scientific community will benefit from increased opportunities in fundamental research, technology development, and education and training.”

Akiko Nakayama: the Japanese artist skilled in fluid mechanics https://physicsworld.com/a/akiko-nakayama-the-japanese-artist-skilled-in-fluid-mechanics/ Tue, 03 Sep 2024 10:00:11 +0000 https://physicsworld.com/?p=116458 Sidney Perkowitz explores the science behind the work of Japanese painter Akiko Nakayama

Any artist who paints is intuitively an expert in empirical fluid mechanics, manipulating liquid and pigment for aesthetic effect. The paint is usually brushed onto a surface material, although it can also be splattered onto a horizontal canvas in a technique made famous by Jackson Pollock or even layered on with a palette knife, as in the works of Paul Cezanne or Henri Matisse. But however the paint is delivered, once it dries, the result is always a fixed, static image.

Japanese artist Akiko Nakayama is different. Based in Tokyo, she makes the dynamic real-time flow of paint, ink and other liquids the centre of her work. Using a variety of colours, she encourages the fluids to move and mix, creating gorgeous, intricate patterns that transmute into unexpected forms and shades.

What also sets Nakayama apart is that she doesn’t work in private. Instead, she performs public “Alive painting” sessions, projecting her creations onto large surfaces, to the accompaniment of music. Audiences see the walls of the venue covered with coloured shapes that arise from natural processes modified by her intervention. The forms look abstract, but in their mutations often resemble living creatures in motion.

Inspired by ink

Born in 1988, Nakayama was trained in conventional techniques of Eastern and Western painting, earning degrees in fine art from Tokyo Zokei University in 2012 and 2014. Her interest in dynamic art goes back to a childhood calligraphy class, where she found herself enthralled by the beauty of the ink flowing in the water while washing her brush.

“It was more beautiful than the characters [I had written],” she recalls, finding herself “fascinated by the freedom of the ink”. Later, while learning to draw, she always preferred to capture a “moment of time” in her sketches. Eventually, Nakayama taught herself how to make patterns from moving fluids, motivated by Johann Wolfgang von Goethe’s treatise Theory of Colours (1810).

Best known as a writer, Goethe also took a keen interest in science and his book critiques Isaac Newton’s work on the physical properties of light. Goethe instead offered his own more subjective insights into his experiments with colour and the beauty they produce. Despite its flaws as a physical theory of light, reading the book encouraged Nakayama to develop methods to pour and agitate various paints in Petri dishes, and to project the results in real time using a camera designed for close-up viewing.

She started learning about liquids, reading research papers and even began examining the behaviour of water droplets under strobe lights. Nakayama also looked into studies by JAXA, the Japanese space agency, of how liquids behave in zero gravity. After finding a 10 ml sample of ferrofluid – a nanoscale ferromagnetic colloidal liquid – in a student science kit, she started using the material in her presentations, manipulating it with a small, permanent magnet.

Nakayama’s art has an unexpected link with space science because ferrofluids were invented in 1963 by NASA engineer Steve Papell, who sought a way to pump liquid rocket fuel in microgravity environments. By putting tiny iron oxide particles into the fuel, he found that the liquid could be drawn into the rocket engine by an electromagnet. Ferrofluids were never used by NASA, but they have many applications in industry, medicine and consumer products.

Secret science of painting

As Nakayama has presented dozens of live performances, exhibitions and commissioned works in Japan and internationally over the last decade, other scientific connections have emerged. She has, for example, mixed acrylic ink with alcohol, dropping the fluid onto a thin layer of acrylic paint to create wonderfully intricate branched, tree-like dendritic forms.

In 2023 her painting caught the attention of materials scientists San To Chan and Eliot Fried at the Okinawa Institute of Science and Technology in Japan. They ended up working with Nakayama to analyse dendritic spreading in terms of the interplay of the different viscosities and surface tensions of the fluids (figure 1).

1 Magic mixtures

When pure ink is dropped onto an acrylic resin substrate 400 μm thick, it remains fairly static over time (top). But if isopropanol (IPA) is mixed into the ink, the combined droplet spreads out to yield intricate, tree-like dendritic patterns. Shown here are drops with IPA at two different volume concentrations: 16.7% (middle) and 50% (bottom).

Chan and Fried published their findings, concluding that the structures have a fractal dimension of 1.68, which is characteristic of “diffusion-limited aggregation” – a process that involves particles clustering together as they diffuse through a medium (PNAS Nexus 3 59).
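
Diffusion-limited aggregation is simple enough to simulate directly. Below is a minimal, illustrative Python sketch of the process – random walkers sticking to a growing cluster – written for this article; it is not the analysis code used by Chan and Fried:

```python
import math
import random

# Diffusion-limited aggregation: walkers released just outside the current
# cluster radius wander randomly and stick on first contact with the cluster.
random.seed(1)
cluster = {(0, 0)}          # start from a single seed at the origin
r_max = 0.0                 # current cluster radius

def touches(x, y):
    return any((x + dx, y + dy) in cluster
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(1500):
    # launch from a random point on a circle just outside the cluster
    theta = random.uniform(0.0, 2.0 * math.pi)
    r_launch = r_max + 5.0
    x, y = round(r_launch * math.cos(theta)), round(r_launch * math.sin(theta))
    while True:
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if touches(x, y):
            cluster.add((x, y))
            r_max = max(r_max, math.hypot(x, y))
            break
        if math.hypot(x, y) > r_launch + 20.0:   # wandered too far: relaunch
            theta = random.uniform(0.0, 2.0 * math.pi)
            x = round(r_launch * math.cos(theta))
            y = round(r_launch * math.sin(theta))

print(f"{len(cluster)} sites aggregated, cluster radius ~ {r_max:.0f} lattice units")
# crude fractal-dimension estimate from N ~ r_max**D (DLA gives D ~ 1.7)
print(f"crude dimension estimate: {math.log(len(cluster)) / math.log(r_max):.2f}")
```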

The two researchers also investigated the liquid parameters so that an experimentalist or artist could tune the arrangement to vary the dendritic results. Nakayama calls this result a “map” that allows her to purposefully create varied artistic patterns rather than “going on an adventure blindly”. Chan and Fried have even drawn up a list of practical instructions so that anyone inclined can make their own dendritic paintings at home.

Another researcher who has also delved into the connection between fluid dynamics and art is Roberto Zenit, a mechanical engineer at Brown University in the US. Zenit has shown that Jackson Pollock created his famous abstract compositions by carefully controlling the motion of viscous filaments (Phys. Rev. Fluids 4 110507). Pollock also avoided hydrodynamic instabilities that would have otherwise made the paint break up before it hit the canvas (PLOS One 14 e0223706).

Deeper meanings

Although Nakayama likes to explore the science behind her artworks, she has not lost sight of the deeper meanings in art. She told me, for example, that the bubbles that sometimes arise as she creates liquid shapes have a connection with the so-called “Vanitas” tradition in art that emerged in western Europe in the 16th and 17th centuries.

Derived from the Latin word for “vanity”, this kind of art was not about having an over-inflated belief in oneself as the word might suggest. Instead, these still-life paintings, largely by Dutch artists, would often have symbols and images that indicate the transience and fragility of life, such as snuffed-out candles with wisps of smoke, or fragile soap bubbles blown from a pipe.

The real bubbles in Nakayama’s artworks always stay spherical thanks to their strong surface tension, thereby displaying – in her mind – a human-like mixture of strength and vulnerability. It’s not quite the same as the fragility of the Vanitas paintings, but for Nakayama – who acknowledges that she’s not a scientist – her works are all about creating “a visual conversation between an artist and science”.

Asked about her future directions in art, however, Nakayama’s response makes immediate sense to any scientist. “Finding universal forms of natural phenomena in paintings is a joy and discovery for me,” she says. “I would be happy to continue to learn about the physics and science that make up this world, and to use visual expression to say ‘the world is beautiful’.”

Open problem in quantum entanglement theory solved after nearly 25 years https://physicsworld.com/a/open-problem-in-quantum-entanglement-theory-solved-after-nearly-25-years/ Tue, 03 Sep 2024 08:30:44 +0000 https://physicsworld.com/?p=116537 Non-existence of universal maximally entangled isospectral mixed states has implications for research on quantum technologies

A quarter of a century after it was first posed, a fundamental question about the nature of quantum entanglement finally has an answer – and that answer is “no”. In a groundbreaking study, Julio I de Vicente from the Universidad Carlos III de Madrid, Spain, showed that so-called maximally entangled mixed states for a fixed spectrum do not always exist, challenging long-standing assumptions in quantum information theory in a way that has broad implications for quantum technologies.

Since the turn of the millennium, the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, Austria, has maintained a conspicuous list of open problems in the quantum world. Number 5 on this list asks: “Is it true that for arbitrary entanglement monotones one gets the same maximally entangled states among all density operators of two qubits with the same spectrum?” In simpler terms, this question is essentially asking whether a quantum system can maintain its maximally entangled state in a realistic scenario, where noise is present.

This question particularly suited de Vicente, who has long been fascinated by foundational issues in quantum theory and is drawn to solving well-defined mathematical problems. Previous research had suggested that such a maximally entangled mixed state might exist for systems of two qubits (quantum bits), thereby maximizing multiple entanglement measures. In a study published in Physical Review Letters, however, de Vicente concludes otherwise, demonstrating that for certain rank-2 mixed states, no state can universally maximize all entanglement measures across all states with the same spectrum.

“I had tried other approaches to this problem that turned out not to work,” de Vicente tells Physics World. “However, once I came up with this idea, it was very quick to see that this gave the solution. I can say that I felt very excited seeing that such a relatively simple argument could be used to answer this question.”

Importance of entanglement

Mathematics aside, what does this result mean for real-world applications and for physics? Well, entanglement is a unique quantum phenomenon with no classical counterpart, and it is essential for various quantum technologies. Since our present experimental reach is limited to a restricted set of quantum operations, entanglement is also a resource, and a maximally entangled state (meaning one that maximizes all measures of entanglement) is an especially valuable resource.

One example of a maximally entangled state is a Bell state, which is one of four possible states for a system of two qubits that are each in a superposition of 0 and 1. Bell states are pure states, meaning that they can, in principle, be known with complete precision. This doesn’t necessarily mean they have definite values for properties like energy and momentum, but it distinguishes them from a statistical mixture of different pure states.
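
Written in the standard computational basis, the four Bell states are:

$$|\Phi^{\pm}\rangle = \frac{|00\rangle \pm |11\rangle}{\sqrt{2}}, \qquad |\Psi^{\pm}\rangle = \frac{|01\rangle \pm |10\rangle}{\sqrt{2}}$$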

Maximally entangled mixed states

The concept of maximally entangled mixed states (MEMS) is a departure from the traditional view of entanglement, which has been primarily associated with pure states. Conceptually, when we talk about a pure state, we imagine a scenario where a device consistently produces the same quantum state through a specific preparation process. However, practical scenarios often involve mixed states due to noise and other factors.

In effect, MEMS are a bridge between theoretical models and practical applications, offering robust entanglement even in less-than-ideal conditions. This makes them particularly valuable for technologies like quantum encryption and quantum computing, where maintaining entanglement is crucial for performance.
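
To make the notion of an “entanglement measure” concrete, here is a short illustrative Python sketch of one widely used two-qubit measure, the Wootters concurrence, evaluated for a Bell state mixed with white noise. The noise model and the code are our own illustration, not material from de Vicente’s paper:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy             # "spin-flipped" state
    evals = np.linalg.eigvals(rho @ rho_tilde)   # real, >= 0 up to rounding
    lam = np.sort(np.sqrt(np.abs(evals)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state |Phi+> mixed with white noise: rho = p |Phi+><Phi+| + (1 - p) I/4
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi_plus, phi_plus)
for p in (1.0, 0.7, 0.4, 0.2):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(f"p = {p:.1f}  concurrence = {concurrence(rho):.3f}")
```

For a pure Bell state (p = 1) the concurrence is 1; as the noise fraction grows, the entanglement falls and eventually vanishes.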

What next?

de Vicente’s result relies on an entanglement measure that is constructed ad hoc and has no clear operational meaning. A more relevant version of this result for applications, he says, would be to “identify specific quantum information protocols where the optimal state for a given level of noise is indeed different”.

While de Vicente’s finding addresses an existing question, it also introduces several new ones, such as the conditions needed to simultaneously optimize various entanglement measures within a system. It also raises the possibility of investigating whether de Vicente’s theorems hold under other notions of “the same level of noise”, particularly if these arise in well-defined practical contexts.

The implications of this research extend beyond theoretical physics. By enabling better control and manipulation of quantum states, MEMS could revolutionize how we approach problems in quantum mechanics, from computing to material science. Now that we understand their limitations better, researchers are poised to explore their potential applications, including their role in developing quantum technologies that are robust, scalable, and practical.

Metasurface makes thermal sources emit laser-like light https://physicsworld.com/a/metasurface-makes-thermal-sources-emit-laser-like-light/ Mon, 02 Sep 2024 10:41:54 +0000 https://physicsworld.com/?p=116527 Pillar-studded surface just hundreds of nanometres thick allows researchers to control direction, polarization and phase of thermal radiation

Incandescent light bulbs and other thermal radiation sources can produce coherent, polarized and directed emissions with the help of a structured thin film known as a metasurface. Created by Andrea Alù and colleagues at the City University of New York (CUNY), US, the new metasurface uses a periodic structure with tailored local perturbations to transform ordinary thermal emissions into something more like a laser beam – an achievement heralded as “just the beginning” for thermal radiation control.

Scientists have previously shown that metasurfaces can perform tasks such as wavefront shaping, beam steering, focusing and vortex beam generation that normally require bulky traditional optics. However, these metasurfaces only work with the highly coherent light typically emitted by lasers. “There is a lot of hype around compactifying optical devices using metasurfaces,” says Alù, the founding director of CUNY’s Photonics Initiative. “But people tend to forget that we still need a bulky laser that is exciting them.”

Unlike lasers, most light sources – including LEDs as well as incandescent bulbs and the Sun – produce light that is highly incoherent and unpolarized, with spectra and propagation directions that are hard to control. While it is possible to make thermal emissions coherent, doing so requires special silicon carbide materials, and the emitted light has several shortcomings. Notably, a device designed to emit light to the right will also emit it to the left – a fundamental symmetry known as reciprocity.

Some researchers have argued that reciprocity fundamentally limits how asymmetric the wavefront emitted from such structures can be. However, in 2021 members of Alù’s group showed theoretically that a metasurface could produce coherent thermal emission for any polarization, travelling in any direction, without relying on special materials. “We found that the reciprocity constraint could be overcome with a sufficiently complicated geometry,” Alù says.

Smart workarounds

The team’s design incorporated two basic elements. The first is a periodic array that interacts with the light in a highly non-local way, creating a long-range coupling that forces the random oscillations of thermal emission to become coherent across long time scales and distances. The second element is a set of tailored local perturbations to this periodic structure that make it possible to break the symmetry in emission direction.

The only problem was that this structure proved devilishly difficult to construct, as it would have required aligning two independent nanostructured arrays within a 10 nm tolerance. In the latest work, which is described in Nature Nanotechnology, Alù and colleagues found a way around this by backing one structured film with a thin layer of gold. This metallic backing effectively creates an image of the structure, which breaks the vertical symmetry as needed to realize the effect. “We were surprised this worked,” Alù says.

The final structure was made from silicon and structured as an array of rectangular pillars (for the non-local interactions) interspersed with elliptical pillars (for the asymmetric emission). Using this structure, the team demonstrated coherent directed emission for six different polarizations, at frequencies of their choice. They also used it to send circularly polarized light in arbitrary directions, and to split thermal emissions into orthogonally polarized components travelling in different directions. While this so-called photonic Rashba effect has been demonstrated before in circularly polarized light, the new thermal metasurface produces the same effect for arbitrary polarizations – something not previously thought possible.

According to Alù, the new metasurface offers “interesting opportunities” for lighting, imaging, and thermal emission management and control, as well as thermal camouflaging. George Alexandropoulos, who studies metasurfaces for informatics and telecommunication at the National and Kapodistrian University of Athens, Greece but was not involved in the work, agrees. “Metasurfaces controlling thermal radiation could direct thermal emission to energy-harvesting wireless devices,” he says.

Riccardo Sapienza, a physicist at Imperial College London, UK, who also studies metamaterials and was also not involved in this research, agrees that communication could benefit and suggests that infrared sensing could, too. “This is a very exciting result which brings closer the dream of complete control of thermal radiation,” he says. “I am sure this is just the beginning.”

Researchers cut to the chase on the physics of paper cuts https://physicsworld.com/a/researchers-cut-to-the-chase-on-the-physics-of-paper-cuts/ Sun, 01 Sep 2024 09:00:22 +0000 https://physicsworld.com/?p=116517 A paper cut “sweet spot” just happens to be close to the thickness of paper in print magazines

If you have ever been on the receiving end of a paper cut, you will know how painful they can be.

Kaare Jensen from the Technical University of Denmark (DTU), however, has found intrigue in this bloody occurrence. “I’m always surprised that thin blades, like lens or filter paper, don’t cut well, which is unexpected because we usually consider thin blades to be efficient,” Jensen told Physics World.

To find out why paper is so successful at cutting skin, Jensen and fellow DTU colleagues carried out over 50 experiments with a range of paper thicknesses to make incisions into a piece of gelatine at various angles.

Through these experiments and modelling, they discovered that paper cuts are a competition between slicing and “buckling”. Thin paper with a thickness of about 30 microns, or 0.03 mm, doesn’t cut so well because it buckles – a mechanical instability that happens when a slender object like paper is compressed. Once this occurs, the paper can no longer transfer force to the tissue, so is unable to cut.

Thick paper, with a thickness greater than around 200 microns, is also ineffective at making an incision. This is because it distributes the load over a greater area, resulting in only small indentations.

The team found, however, a paper cut “sweet spot” at around 65 microns, when the incision was made at an angle of about 20 degrees to the surface. This thickness happens to be close to that of the paper used in print magazines, which goes some way to explaining why paper cuts happen so annoyingly often.
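
As a rough illustration of this slicing-versus-buckling competition, here is a back-of-envelope Python sketch. Both the simple Euler-buckling model and every parameter value below are our own assumptions for illustration – the DTU team’s actual model is more detailed:

```python
import numpy as np

# Toy model: a paper edge can only cut if (a) the strip does not buckle
# before the contact stress reaches the tissue's critical stress, and
# (b) the finger force spread over the blunt edge still exceeds that stress.
E = 2e9         # Young's modulus of paper (Pa) - assumed
sigma_c = 5e6   # critical stress to cut the tissue (Pa) - assumed
L = 1e-3        # free (unsupported) length of the paper edge (m) - assumed
F = 2.0         # force applied by a finger (N) - assumed
w = 2e-3        # length of edge in contact with the tissue (m) - assumed

t = np.linspace(5e-6, 400e-6, 2000)               # paper thickness (m)
F_buckle = np.pi**2 * E * (w * t**3 / 12) / L**2  # Euler buckling load
F_cut = sigma_c * w * t                           # force needed to start a cut

can_cut = (F_buckle > F_cut) & (F_cut < F)
if can_cut.any():
    lo, hi = t[can_cut][0] * 1e6, t[can_cut][-1] * 1e6
    print(f"cutting window: roughly {lo:.0f} to {hi:.0f} microns")
```

With these assumed numbers the model yields a cutting window of a few tens to a couple of hundred microns – too-thin paper buckles, too-thick paper spreads the load – qualitatively echoing the experiments.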

Using the results from the work, the researchers created a 3D-printed scalpel that uses scrap paper for the cutting edge. Using this so-called “papermachete” they were able to slice through apple, banana peel, cucumber and even chicken.

Jensen notes that the findings are interesting for two reasons. “First, it’s a new case of soft-on-soft interactions where the deformation of two objects intertwines in a non-trivial way,” he says. “Traditional metal knives are much stiffer than biological tissues, while paper is still stiffer than skin but around 100 times weaker than steel.”

The second is that it is a “great way” to teach students about forces given that the experiments are straightforward to do in the classroom. “Studying the physics of paper cuts has revealed a surprising potential use for paper in the digital age: not as a means of information dissemination and storage, but rather as a tool of destruction,” the researchers write.

LUX-ZEPLIN ‘digs deeper’ for dark-matter WIMPs https://physicsworld.com/a/lux-zeplin-puts-new-limit-on-dark-matter-mass/ Sat, 31 Aug 2024 13:09:34 +0000 https://physicsworld.com/?p=116521 Announcement makes us pine for the Black Hills

This article has been updated to correct a misinterpretation of this null result.  

Things can go a bit off-topic at Physics World and recent news about dark matter got us talking about the beauty of the Black Hills of South Dakota. This region of forest and rugged topography is smack dab in the middle of the Great Plains of North America and is most famous for the giant sculpture of four US presidents at Mount Rushmore.

A colleague from Kansas fondly recalled a family holiday in the Black Hills – and as an avid skier, I was pleased to learn that the region is home to the highest ski lift between the Alps and the Rockies.

The Black Hills also have a special place in the hearts of physicists – especially those who are interested in dark matter and neutrinos. The region is home to the Sanford Underground Research Facility, which is located 1300 m below the hills in a former gold mine. It was there that Ray Davis and colleagues first detected neutrinos from the Sun, for which Davis shared the 2002 Nobel Prize for Physics.

Today, the huge facility is home to nearly 30 experiments that benefit from the mine’s low background radiation. One of the biggest experiments is LUX–ZEPLIN, which is searching for dark-matter particles.

Hypothetical substance

Dark matter is a hypothetical substance that is invoked to explain the dynamics of galaxies, the large-scale structure of the cosmos, and more. While dark matter is believed to account for 85% of mass in the universe, physicists have little understanding of what it is – or indeed if it actually exists.

So far, the best that experiments like LUX–ZEPLIN have done is to tell physicists what dark matter isn’t. Now, the latest result from LUX–ZEPLIN places the best-ever limits on the nature of dark-matter particles called WIMPs.

The measurement involved watching several tonnes of liquid xenon for 280 days, looking for flashes of light that would be created when a WIMP collides with a xenon nucleus. However, no evidence was seen for collisions with WIMPs heavier than 9 GeV/c² – about 10 times the mass of the proton.

The team says that the result is “nearly five times better” than previous WIMP searches. “These are new world-leading constraints by a sizable margin on dark matter and WIMPs,” explains Chamkaur Ghag, who speaks for the LUX–ZEPLIN team and is based at University College London.

Digging for treasure

“If you think of the search for dark matter like looking for buried treasure, we’ve dug almost five times deeper than anyone else has in the past,” says Scott Kravitz of the University of Texas at Austin who is the deputy physics coordinator for the experiment.

This will not be the last that we hear from LUX–ZEPLIN, which will collect a total of 1000 days of data before it switches off in 2028. And it’s not only dark matter that the experiment is looking for. Because it is in a low background environment, LUX–ZEPLIN is also being used to search for other rare or hypothetical events such as the radioactive decay of xenon, neutrinoless double beta decay and neutrinos from the beta decay of boron nuclei in the Sun.

LUX–ZEPLIN is not the only experiment at Sanford that is looking for neutrinos. The Deep Underground Neutrino Experiment (DUNE) is currently under construction at the lab and is expected to be completed in 2028. DUNE will detect neutrinos in four huge tanks that will each be filled with 17,000 tonnes of liquid argon. Some neutrinos will be beamed from 1300 km away at Fermilab near Chicago and together the facilities will comprise the Long-Baseline Neutrino Facility.

One aim of the facility is to study the flavour oscillation of neutrinos as they travel over long distances. This could help explain why there is much more matter than antimatter in the universe. By detecting neutrinos from exploding stars, DUNE could also shed light on the nuclear processes that occur during supernovae. And, it might even detect the radioactive decay of the proton, a hypothetical process that could point to physics beyond the Standard Model.

Gold nanoparticles could improve radiotherapy of pancreatic cancer https://physicsworld.com/a/gold-nanoparticles-could-improve-radiotherapy-of-pancreatic-cancer/ Fri, 30 Aug 2024 09:49:16 +0000 https://physicsworld.com/?p=116502 Irradiating tumours containing gold nanoparticles should enhance radiotherapy effectiveness while minimizing potential side effects

The primary goal of radiotherapy is to effectively destroy the tumour while minimizing side effects to nearby normal tissues. Focusing on the challenging case of pancreatic cancer, a research team headed up at Toronto Metropolitan University in Canada has demonstrated that gold nanoparticles (GNPs) show potential to optimize this fine balance between tumour control probability (TCP) and normal tissue complication probability (NTCP).

GNPs are under scrutiny as candidates for improving the effectiveness of radiation therapy by enhancing dose deposition within the tumour. The dose enhancement observed when irradiating GNP-infused tumour tissue is mainly due to the Auger effect, in which secondary electrons generated within the nanoparticles can damage cancer cells.

“Nanoparticles like GNPs could be delivered to the tumour using targeting agents such as [the cancer drug] cetuximab, which can specifically bind to the epidermal growth factor receptor expressed on pancreatic cancer cells, ensuring a high concentration of GNPs in the tumour site,” says first author Navid Khaledi, now at CancerCare Manitoba.

This increased localized energy deposition should improve tumour control; but it’s also crucial to consider possible toxicity to normal tissues due to the presence of GNPs. To investigate this further, Khaledi and colleagues simulated treatment plans for five pancreatic cancer cases, using CT images from the Cancer Imaging Archive database.

Plan comparison

For each case, the team compared plans generated using a 2.5 MV photon beam in the presence of GNPs with conventional 6 MV plans. “We chose a 2.5 MV beam due to the enhanced photoelectric effect at this energy, which increases the interaction probability between the beam and the GNPs,” Khaledi explains.

The researchers created the treatment plans using the MATLAB-based planning program matRad. They first determined the dose enhancement conferred by 50-nm diameter GNPs by calculating the relative biological effectiveness (RBE, the ratio of dose without to dose with GNPs for equal biological effects) using custom MATLAB codes. The average RBE for the 2.5 MV beam, using α and β radiosensitivity values for pancreatic tumour, was 1.19. They then applied RBE values to each tumour voxel to calculate dose distributions and TCP and NTCP values.

The team considered four treatment scenarios, based on a prescribed dose of 40 Gy in five fractions: 2.5 MV plus GNPs, designed to increase TCP (using the prescribed dose, but delivering an RBE-weighted dose of 40 Gy x 1.19); 2.5 MV plus GNPs, designed to reduce NTCP (lowering the prescribed dose to deliver an RBE-weighted dose of 40 Gy); 6 MV using the prescribed dose; and 6 MV with the prescribed dose increased to 47.6 Gy (40 Gy x 1.19).

The analysis showed that the presence of GNPs significantly increased TCP values, from around 59% for the standard 6 MV plans to 93.5% for the 2.5 MV plus GNPs (increased TCP) plans. Importantly, the GNPs helped to maintain low NTCP values of below 1%, minimizing the risk of complications in normal tissues. Using a conventional 6 MV beam with an increased dose also resulted in high TCP values, but at the cost of raising NTCP to 27.8% in some cases.
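
To illustrate how an RBE-weighted dose feeds into such TCP estimates, here is a minimal Python sketch using the standard linear–quadratic Poisson model. The radiosensitivity parameters and clonogen number below are hypothetical placeholders, not the values used in the study:

```python
import numpy as np

# Linear-quadratic Poisson TCP model (illustrative parameters only)
alpha, beta = 0.15, 0.03   # Gy^-1 and Gy^-2 - hypothetical radiosensitivity
N0 = 1e7                   # hypothetical number of clonogenic tumour cells
n, d = 5, 8.0              # 40 Gy delivered in 5 fractions of 8 Gy

def tcp(dose_per_fraction, n_fractions):
    # surviving fraction after n fractions, then the Poisson probability
    # that no clonogenic cell survives
    sf = np.exp(-n_fractions * (alpha * dose_per_fraction
                                + beta * dose_per_fraction**2))
    return np.exp(-N0 * sf)

print(f"TCP at the physical dose:             {tcp(d, n):.3f}")
print(f"TCP at the RBE-weighted (x1.19) dose: {tcp(1.19 * d, n):.3f}")
```

Even in this toy version, multiplying the dose per fraction by an RBE of 1.19 pushes the TCP sharply upwards, mirroring the qualitative jump reported in the study.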

Minimizing risks

The team next assessed the dose to the duodenum, the main dose-limiting organ for pancreatic radiotherapy. The mean dose to the duodenum was highest for the increased-dose 6 MV photon beam, and lowest for the 2.5 MV plus GNPs plans. Similarly, D2%, the maximum dose received by 2% of the volume, was highest with the increased-dose 6 MV beam, and lowest with 2.5 MV plus GNPs.

It’s equally important to consider dose to the liver and kidney, as these organs may also uptake GNPs. The analysis revealed relatively low doses to the liver and left kidney for all treatment options, with mean dose and D2% generally below clinically significant thresholds. The highest mean doses to the liver and left kidney for 2.5 MV plus GNPs were 3.3 and 7.7 Gy, respectively, compared with 2.3 and 8 Gy for standard 6 MV photons.

The researchers conclude that the use of GNPs in radiation therapy has potential to significantly improve treatment outcomes and benefit cancer patients. Khaledi notes, however, that although GNPs have shown promise in preclinical studies and animal models, they have not yet been tested for radiotherapy enhancement in human subjects.

Next, the team plans to investigate new linac targets that could potentially enable therapeutic applications. “One limitation of the current 2.5 MV beam is its low dose rate (60 MU/min) on TrueBeam linacs, primarily due to the copper target’s heat tolerance,” Khaledi tells Physics World. “Increasing the dose rate could make the beam clinically useful, but it risks melting the copper target. Future work will evaluate the beam spectrum for different target designs and materials.”

The researchers report their findings in Physics in Medicine & Biology.

The Wow! signal: did a telescope in Ohio receive an extraterrestrial communication in 1977? https://physicsworld.com/a/the-wow-signal-did-a-telescope-in-ohio-receive-an-extraterrestrial-communication-in-1977/ Thu, 29 Aug 2024 14:37:42 +0000 https://physicsworld.com/?p=116495 This podcast features an astrobiologist who has identified similar radio signals

On 15 August 1977 the Big Ear radio telescope in the US was scanning the skies in a search for signs of intelligent extraterrestrial life. Suddenly, it detected a strong, narrow bandwidth signal that lasted a little longer than one minute – as expected if Big Ear’s field of vision swept across a steady source of radio waves. That source, however, had vanished 24 hours later when the Ohio-based telescope looked at the same patch of sky.

This was the sort of technosignature that searches for extraterrestrial intelligence (SETI) were seeking. Indeed, one scientist wrote the word “Wow!” next to the signal on a paper print-out of the Big Ear data.

Ever since, the origins of the Wow! signal have been debated – and now, a trio of scientists have an astrophysical explanation that does not involve intelligent extraterrestrials. One of them, Abel Méndez, is our guest in this episode of the Physics World Weekly podcast.

Méndez is an astrobiologist at the University of Puerto Rico at Arecibo and he explains how observations made at the Arecibo Telescope have contributed to the trio’s research.

  • Abel Méndez, Kevin Ortiz Ceballos and Jorge I Zuluaga describe their research in a preprint on arXiv.

Heavy exotic antinucleus gives up no secrets about antimatter asymmetry https://physicsworld.com/a/heavy-exotic-antinucleus-gives-up-no-secrets-about-antimatter-asymmetry/ Thu, 29 Aug 2024 13:08:31 +0000 https://physicsworld.com/?p=116491 Antihyperhydrogen-4 is observed by the STAR Collaboration

An antihyperhydrogen-4 nucleus – the heaviest antinucleus ever produced – has been observed in heavy ion collisions by the STAR Collaboration at Brookhaven National Laboratory in the US. The antihypernucleus contains a strange quark, making it a heavier cousin of antihydrogen-4. Physicists hope that studying such antimatter particles could shed light on why there is much more matter than antimatter in the visible universe – however in this case, nothing new beyond the Standard Model of particle physics was observed.

In the first millionth of a second after the Big Bang, the universe is thought to have been too hot for quarks to have been bound into hadrons. Instead it comprised a strongly interacting fluid called a quark–gluon plasma. As the universe expanded and cooled, bound baryons and mesons were created.

The Standard Model forbids the creation of matter without the simultaneous creation of antimatter, and yet the universe appears to be made entirely of matter. While antimatter is created by nuclear processes – both naturally and in experiments – it is swiftly annihilated on contact with matter.

The Standard Model also says that matter and antimatter should be identical after charge, parity and time are reversed. Therefore, finding even tiny asymmetries in how matter and antimatter behave could provide important information about physics beyond the Standard Model.

Colliding heavy ions

One way forward is to create quark–gluon plasma in the laboratory and study particle–antiparticle creation. Quark–gluon plasma is made by smashing together heavy ions such as lead or gold. A variety of exotic particles and antiparticles emerge from these collisions. Many of them decay almost immediately, but their decay products can be detected and compared with theoretical predictions.

Quark–gluon plasma can include hypernuclei, which are nuclei containing one or more hyperons. Hyperons are baryons containing one or more strange quarks, making hyperons the heavier cousins of protons and neutrons. These hypernuclei are thought to have been present in the high-energy conditions of the early universe, so physicists are keen to see if they exhibit any matter/antimatter asymmetries.

In 2010, the STAR collaboration unveiled the first evidence of an antihypernucleus, which was created by smashing gold nuclei together at 200 GeV. This was the antihypertriton, which is the antimatter version of an exotic counterpart to tritium in which one of the down quarks in one of the neutrons is replaced by a strange quark.

Now, STAR physicists have created a heavier antihypernucleus. They recorded over 6 billion collisions using pairs of uranium, ruthenium, zirconium and gold ions moving at more than 99.9% of the speed of light. In the resulting quark–gluon plasma, the researchers found evidence of antihyperhydrogen-4 (antihypertriton with an extra antineutron). Antihyperhydrogen-4 decays almost immediately by emitting a pion, producing antihelium-4 – a nucleus that the collaboration first detected in 2011. The researchers therefore knew what to look for among the debris of their collisions.

Sifting through the collisions

Sifting through the collision data, the researchers found 22 events that appeared to be antihyperhydrogen-4 decays. After subtracting the expected background, they were left with approximately 16 events, which was statistically significant enough to claim that they had observed antihyperhydrogen-4.

The researchers also observed evidence of the decays of hyperhydrogen-4, antihypertriton and hypertriton. In all cases, the results were consistent with the predictions of charge–parity–time (CPT) symmetry. This is a central tenet of modern physics that says that if the charge and internal quantum numbers of a particle are reversed, the spatial co-ordinates are reversed and the direction of time is reversed, the outcome of an experiment will be identical.

STAR member Hao Qiu of the Institute of Modern Physics at the Chinese Academy of Sciences says that, in his view, the most important feature of the work is the observation of the hyperhydrogen-4. “In terms of the CPT test, it’s just that we’re able to do it…The uncertainty is not very small compared with some other tests.”

Qiu says that he, personally, hopes the latest research may provide some insight into violation of charge–parity symmetry (i.e. without flipping the direction of time). This has already been shown to occur in some systems. “Ultimately, though, we’re experimentalists – we look at all approaches as hard as we can,” he says; “but if we see CPT symmetry breaking we have to throw out an awful lot of current physics.”

“I really do think it’s an incredibly impressive bit of experimental science,” says theoretical nuclear physicist Thomas Cohen of University of Maryland, College Park; “The idea that they make thousands of particles each collision, find one of these in only a tiny fraction of these events, and yet they’re able to identify this in all this really complicated background – truly amazing!”

He notes, however, that “this is not the place to look for CPT violation…Making precision measurements on the positron mass versus the electron mass or that of the proton versus the antiproton is a much more promising direction simply because we have so many more of them that we can actually do precision measurements.”    

The research is described in Nature.

Metamaterial gives induction heating a boost for industrial processing https://physicsworld.com/a/metamaterial-gives-induction-heating-a-boost-for-industrial-processing/ Wed, 28 Aug 2024 15:05:27 +0000 https://physicsworld.com/?p=116469 Technology could help heavy industry transition from fossil fuels

A thermochemical reactor powered entirely by electricity has been unveiled by Jonathan Fan and colleagues at Stanford University. The experimental reactor was used to convert carbon dioxide into carbon monoxide with close to 90% efficiency. This makes it a promising development in the campaign to reduce carbon dioxide emissions from industrial processes that usually rely on fossil fuels.

Industrial processes account for a huge proportion of carbon emissions worldwide – accounting for roughly a third of carbon emissions in the US, for example. In part, this is because many industrial processes require huge amounts of heat, which can only be delivered by burning fossil fuels. To address this problem, a growing number of studies are exploring how combustion could be replaced with electrical sources of heat.

“There are a number of ways to use electricity to generate heat, such as through microwaves or plasma,” Fan explains. “In our research, we focus on induction heating, owing to its potential for supporting volumetric heating at high power levels, its ability to scale to large power levels and reactor volumes, and its strong safety record.”

Induction heating uses alternating magnetic fields to induce electric currents in a conductive material, generating heat via the electrical resistance of the material. It is used in a wide range of applications from domestic cooking to melting scrap metal. However, it has been difficult to use induction heating for complex industrial applications.

In its study, Fan’s team focused on using inductive heating in thermochemical reactors, where gases are transformed into valuable products through reactions with catalysts.

Onerous requirements

The heating requirements for these reactors are especially onerous, as Fan explains. “They need to produce heat in a 3D space; they need to feature exceptionally high heat transfer rates from the heat-absorbing material to the catalyst; and the energy efficiency of the process needs to be nearly 100%.”

To satisfy these requirements, the Stanford researchers created a new design for internal reactor structures called baffles. Conventional baffles are used to enhance heat transfer and mixing within a reactor, improving its reaction rates and yields.

In their design, Fan’s team reimagined these structures as integral components of the heating process itself. Their new baffles comprised a 3D lattice made from a conductive ceramic, which can be heated via magnetic induction at megahertz frequencies.

“The lattice structure can be modelled as a medium whose electrical conductivity depends on both the material composition of the ceramic and the geometry of the lattice,” Fan explains. “Therefore, it can be conceptualized as a metamaterial, whose physical properties can be tailored via their geometric structuring.”

Encouraging heat transfer

This innovative design addressed three key requirements of a thermochemical reactor. First, by occupying the entire reactor volume, it ensures uniform 3D heating. Second, the metamaterial’s large surface area encourages heat transfer between the lattice and the catalyst. Finally, the combination of the high induction frequency and low electrical conductivity in the lattice delivers high energy efficiency.
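
One way to see why a megahertz frequency combined with a modest effective conductivity helps is through the electromagnetic skin depth, which sets how deeply the induced currents – and hence the heating – penetrate the lattice. A quick Python sketch with illustrative numbers of our own choosing, not values from the paper:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)
sigma = 100.0          # assumed effective conductivity of the lattice (S/m)
f = 10e6               # assumed induction frequency (Hz)

# skin depth: how far the induced currents (and heating) penetrate
delta = math.sqrt(2.0 / (MU0 * sigma * 2.0 * math.pi * f))
print(f"skin depth ~ {delta * 100:.1f} cm")   # ~1.6 cm for these numbers
```

A centimetre-scale skin depth, comparable to the size of the baffle structure, is consistent with heating the reactor volume rather than just its surface.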

To demonstrate these advantages, Fan says, “we tailored the metamaterial reactor for the ‘reverse water gas shift’ reaction, which converts carbon dioxide into carbon monoxide – a useful chemical for the synthesis of sustainable fuels”.
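
For reference, the reverse water gas shift reaction is

$$\mathrm{CO_2} + \mathrm{H_2} \rightleftharpoons \mathrm{CO} + \mathrm{H_2O}$$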

To boost the efficiency of the conversion, the team used a carbonate-based catalyst to minimize unwanted side reactions. A silicon carbide foam lattice baffle and a novel megahertz-frequency power amplifier were also used.

As Fan explains, initial experiments with the reactor yielded very promising results. “These demonstrations indicate that our reactor operates with electricity to internal heat conversion efficiencies of nearly 90%,” he says.

The team hopes that its design offers a promising step towards electrically powered thermochemical reactors that are suited for a wide range of useful chemical processes.

“Our concept could not only decarbonize the powering of chemical reactors but also make them smaller and simpler,” Fan says. “We have also found that as our reactor concept is scaled up, its energy efficiency increases. These implications are important, as economics and ease of implementation will dictate how quickly decarbonized reactor technologies could translate to real-world practice.”

The research is described in Joule.

‘Kink states’ regulate the flow of electrons in graphene https://physicsworld.com/a/kink-states-regulate-the-flow-of-electrons-in-graphene/ Wed, 28 Aug 2024 12:00:11 +0000 https://physicsworld.com/?p=116456 New valleytronics-based switch could have applications in quantum networks

A new type of switch sends electrons propagating in opposite directions along the same paths – without ever colliding with each other. The switch works by controlling the presence of so-called topological kink states in a material known as Bernal bilayer graphene, and its developers at Penn State University in the US say that it could lead to better ways of transmitting quantum information.

Bernal bilayer graphene consists of two atomically-thin sheets of carbon stacked on top of each other and shifted slightly. This arrangement gives rise to several unusual electronic behaviours. One such behaviour, known as the quantum valley Hall effect, gets its name from the dips or “valleys” that appear in graphs of an electron’s energy relative to its momentum. Because graphene’s conduction and valence bands meet at discrete points (known as Dirac points), it has two such valleys. In the quantum valley Hall effect, the electrons in these different valleys flow in opposite directions. Hence, by manipulating the population of the valleys, researchers can alter the flow of electrons through the material.

This process of controlling the flow of electrons via their valley degree of freedom is termed “valleytronics” by analogy with spintronics, which uses the internal degree of freedom of electron spin to store and manipulate bits of information. For valleytronics to be effective, however, the materials the electrons flow through need to be of very high quality. This is because any atomic defects can produce intervalley backscattering, which causes electrons travelling in opposite directions to collide with each other.

A graphite/hBN global gate

Researchers led by Penn State physicist Jun Zhu have now succeeded in producing a device that is pristine enough to support such behaviour. They did this by incorporating a stack made from graphite and a two-dimensional material called hexagonal boron nitride (hBN) into their design. This stack, which acts as a global “gate” that allows electrons to flow through the device, is free of impurities, and team member Ke Huang explains that it was key to the team’s technical advance.

The principle behind the improvement is that while graphite is an excellent electrical conductor, hBN is an insulator. By combining the two materials, Zhu, Huang and colleagues created a structure known as a topological insulator – a material that conducts electricity very well along its edges or surfaces while acting as an insulator in its bulk. Within the edge states of such a topological insulator, electrons can only travel along one pathway. This means that, unlike in a normal conductor, they do not experience backscatter. This remarkable behaviour allows topological insulators to carry electrical current with near-zero dissipation.

In the present work, which is described in Science, the researchers confined electrons to special, topologically protected electrically conducting pathways known as kink states that formed by electrically gating the stack. By controlling the presence or absence of these states, they showed that they could regulate the flow of electrons in the system.

A quantized resistance value

“The amazing thing about our devices is that we can make electrons moving in opposite directions not collide with one another even though they share the same pathways,” Huang says. “This corresponds to the observation of a quantized resistance value, which is key to the potential application of the kink states as quantum wires to transmit quantum information.”
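
A brief aside on what a “quantized resistance value” means here: in the Landauer picture of ballistic transport, N backscatter-free conducting channels give a conductance of N e²/h, so the resistance sits at a value set only by fundamental constants. This is the generic formula; the appropriate N for the kink states depends on their spin and valley degeneracies:

$$G = N\,\frac{e^2}{h}, \qquad R = \frac{h}{N e^2} \approx \frac{25.8\ \mathrm{k\Omega}}{N}$$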

Importantly, this quantization of the kink states persists even when the researchers increased the temperature of the system from near absolute zero to 50 K. Zhu describes this as surprising because quantum states are fragile, and often only exist at temperatures of a few Kelvin. Operation at elevated temperatures will, of course, be important for real-world applications, she adds.

The new switch is the latest addition to a group of kink state-based quantum electronic devices the team has already built. These include valves, waveguides and beamsplitters. While the researchers admit that they have a long way to go before they can assemble these components into a fully functioning quantum interconnect system, they say their current set-up is potentially scalable and can already be programmed to direct current flow. They are now planning to study how electrons behave like coherent waves when travelling along the kink state pathways. “Maintaining quantum coherence is a key requirement for any quantum interconnect,” Zhu tells Physics World.

A breezy tour of what gaseous materials do for us https://physicsworld.com/a/a-breezy-tour-of-what-gaseous-materials-do-for-us/ Wed, 28 Aug 2024 10:00:50 +0000 https://physicsworld.com/?p=116336 Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik

The first person to use gas for illumination was a French engineer by the name of Philippe Lebon. In 1801 his revolutionary system of methane pipes and jets lit up the Hôtel de Seignelay so brilliantly that ordinary Parisians paid three francs apiece just to marvel at it. Overnight guests may have been less enthusiastic. Although methane itself is colourless and odourless, Lebon’s process for extracting it left the gas heavily contaminated with hydrogen sulphide, which – as Mark Miodownik cheerfully reminds us in his latest book – is a chemical that “smells of farts”.

The often odorous and frequently dangerous world of gases is a fascinating subject for a popular-science book. It’s also a logical one for Miodownik, a materials researcher at University College London, UK, whose previous books were about solids and liquids. The first, Stuff Matters, was a huge critical and commercial success, winning the 2014 Royal Society Winton Prize for science books (and Physics World’s own Book of the Year award) on its way to becoming a New York Times bestseller. The second, Liquid, drew more muted praise, with some critics objecting to a narrative gimmick that shoehorned liquid-related facts into the story of a hypothetical transatlantic flight.

Miodownik writes about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations

Miodownik’s third book It’s a Gas avoids this artificial structure and is all the better for it. It also adopts a very loose definition of “gas”, which leaves Miodownik free to write about the science of substances such as breath, fragrance and wind as well as methane, hydrogen and other gases with precise chemical formulations. The result is a lively, free-associating mixture of personal, scientific and historical anecdotes very reminiscent of Stuff Matters, though inevitably one that feels less exceptional than it did the first time around.

The chapter on breath shows how this mixture works. It begins with a story about the young Miodownik watching a brass band march past. Next, we get an explanation of how air travels through brass instruments. By the end of the chapter, Miodownik has moved on, via Air Jordan sneakers and much else, to pneumatic bicycle tyres and their surprising impact on English genetic diversity.

Though the connection might seem fanciful at first, it seems that after John Dunlop patented his air-filled rubber bicycle tyre in 1888, many people (especially women) were suddenly able to traverse bumpy roads cheaply, comfortably and without assistance. As their horizons expanded, their inclination to marry someone from the same parish plummeted: between 1887 and the early years of the 20th century, marriages of this nature dropped from 77% to 41% of the total.

Miodownik is not the first to make the link between bicycle tyres and longer-distance courtships. (He credits the geneticist Steve Jones for the insight, building on work by the 20th-century geographer P J Parry.) However, his decision to include the tale is a deft one, as it illustrates just how important gases and their associated technologies have been to human history.

Anaesthetics are another good example. Though medical professionals were scandalously slow to accept nitrous oxide, ether and chloroform, these beneficial gases eventually revolutionized surgery, sparing millions of patients the agony their predecessors endured. Interestingly, criminals proved far less hidebound than doctors, swiftly adopting chloroform as a way of subduing victims – though the ever-responsible Miodownik notes that this tactic seldom works as quickly as it does in the movies, and errors in dosage can be fatal.

Not every gas-related invention had such far-reaching effects. Inflatable mattresses never really caught on; as Miodownik observes, “beds were for sleeping and sex, and neither was enhanced by being unexpectedly launched into the air every time your partner made a move”.

The history of balloons is similarly chequered. Around the same time as Lebon was filling the Hôtel de Seignelay with aromas, an early balloonist, Sophie Blanchard, was appointed Napoleon’s “aeronaut of the official festivals”. Though Blanchard went on to hold a similar post under the restored King Louis XVIII, Miodownik notes that her favourite party trick – launching fireworks from a balloon filled with highly flammable and escape-prone hydrogen – eventually caught up with her. In 1819, when she was just 41, her firework-festooned craft crashed into the roof of a house and she fell to her death.

Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do

The lessons of this calamity were not learned. More than a century later, 35 passengers and crew on the hydrogen-filled Hindenburg airship (which included a smoking area among its many luxuries) met a similarly fiery end.

Occasional tragedies aside, Miodownik brings a pleasingly childlike wonder to his tales of gaseous derring-do. He often opens chapters with stories from his actual childhood, and while a few of these (like the brass band) are merely cute, others are genuinely jaw-dropping. Some readers may recall that Miodownik began Stuff Matters by describing the time he got stabbed on the London Underground; while there is nothing quite so dramatic in It’s a Gas (and no spoilers in this review), he clearly had an eventful youth.

At times, it becomes almost a game to guess which gas these opening anecdotes will lead to. Though some readers may find the connections a little tenuous, Miodownik is a good enough writer to make his leaps of logic seem effortless even when they are noticeable. The result is a book as delightfully light as its subject matter, and a worthy conclusion to Miodownik’s informal phases-of-matter trilogy – although if he wants to write about plasmas next, I certainly won’t stop him.

  • 2024 Viking 304pp £22.00hb

The post A breezy tour of what gaseous materials do for us appeared first on Physics World.

]]>
Opinion and reviews Margaret Harris reviews It’s a Gas: the Magnificent and Elusive Elements that Expand Our World by Mark Miodownik https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Harris_gaslamp_feature.jpg newsletter
Free-space optical communications with FPGA-based instrumentation https://physicsworld.com/a/free-space-optical-communications-with-fpga-based-instrumentation/ Wed, 28 Aug 2024 09:03:13 +0000 https://physicsworld.com/?p=116418 Join the audience for a live webinar on 2 October 2024 sponsored by Liquid Instruments

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>

As the world becomes more connected by global communications networks, the field of free-space optical communications has grown as an alternative to traditional data transmission at radio frequencies (RF). While optical communications setups deliver scalability and security advantages along with a smaller infrastructure footprint, they also bring distinct challenges, including attenuation, interference and beam divergence.

During this presentation, Liquid Instruments will give an overview of the FPGA-based Moku platform, a reconfigurable suite of test and measurement instruments that provide a flexible and efficient approach to optical communications development. You’ll learn how to use the Moku Lock-in Amplifier and Time & Frequency Analyzer for both coherent and direct detection of optical signals, as well as how to frequency-stabilize lasers with the Laser Lock Box.

You’ll also see how to deploy these instruments simultaneously in Multi-instrument Mode for maximum versatility, and watch a live demo covering digital and analog modulation methods such as phase-shift keying (PSK) and pulse-position modulation (PPM).

A Q&A session will follow the demonstration.

Jason Ball is an engineer at Liquid Instruments, where he focuses on applications in quantum physics, particularly quantum optics, sensing, and computing. He holds a PhD in physics from the Okinawa Institute of Science and Technology and has a comprehensive background in both research and industry, with hands-on experience in quantum computing, spin resonance, microwave/RF experimental techniques, and low-temperature systems.

The post Free-space optical communications with FPGA-based instrumentation appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 2 October 2024 sponsored by Liquid Instruments https://physicsworld.com/wp-content/uploads/2024/08/2024-10-02-webinar-image.jpg
Management insights catalyse scientific success https://physicsworld.com/a/management-insights-catalyse-scientific-success/ Wed, 28 Aug 2024 09:00:38 +0000 https://physicsworld.com/?p=114112 Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Most scientific learning is focused on gaining knowledge, both to understand fundamental concepts and to master the intricacies of experimental tools and techniques. But even the most qualified scientists and engineers need other skills to build a successful career, whether they choose to continue in academia, pursue different pathways in the industrial sector, or exploit their technical prowess to create a new commercial enterprise.

“Scientists and engineers can really benefit from devoting just a small amount of time, in the broad scope of their overall professional development, to understanding and implementing some of the ideas from management science,” says Peter Hirst, who originally trained as a physicist at the University of St Andrews in the UK and now leads the executive education programme at MIT’s Sloan School of Management. “Whether you’re running a lab with just a few post-docs, or you have a leadership role in a large organization, a few simple tools can help to drive innovation and creativity while also making your team more effective and efficient.”

MIT Sloan Executive Education, part of the MIT Sloan School of Management – the business school of the Massachusetts Institute of Technology in Cambridge, US – offers more than 100 short courses and programmes covering all aspects of business innovation, personal skills development and organizational management, many of which can be accessed in different online formats. Delivered by expert faculty who can share their experiences and insights from their own research work, they are designed to introduce frameworks and tools that enable participants to apply key concepts from management science to real-world situations.

Research groups are really a type of enterprise, with people working together to produce clearly defined outputs

Peter Hirst, MIT Sloan School of Management

One obvious example is the process of transforming a novel lab-based technology into a compelling commercial proposition. “Lots of scientists develop intellectual property during their research work, but may not be aware of the opportunities for commercialization,” says Hirst. “Even here at MIT, which is known for its culture of innovation, many researchers don’t realize that educational support is available to help them to understand what’s needed to transfer a new technology into a viable product, or even to become more aware of what might be possible.”

For academic researchers who want to remain focused on the science, Hirst believes that management tools originally developed in the business sector can offer valuable support to help build more effective teams and nurture the talents of diverse individuals. “Research groups are really a type of enterprise, with people working together to produce clearly defined outputs,” he says. “When I was working as a scientist, I didn’t really think about the human system that was doing that work, but that’s a really important dimension that can contribute to the success or failure of the whole enterprise.”

Modern science also depends on forging successful collaborations between research groups, or between academia and industry, while researchers are under mounting pressure to demonstrate the impact of their work – whether for scientific progress or commercial benefit. “Even if you’re working in academia, it’s really important to understand the contribution that your work is making to the whole value chain,” Hirst comments. “It provides context that helps to guide the work, but it’s also vital for sustainably securing the resources that are needed to pursue the science.”

The training offered by MIT Sloan takes different formats, including short courses and longer programmes that take a deeper dive into key topics. In each case, however, the faculty designs tasks, simulations and projects that allow participants to gain a deeper understanding of key concepts and how they might be exploited in their own workplace. “People believe by seeing, but they learn by doing,” says Hirst. “Our guiding philosophy is that the learning is always more effective if it can be done in the context of real work, real problems, and real challenges.”

Business team

Many of the courses are taught on the MIT campus, offering the opportunity for delegates to discuss key ideas, work together on training tasks, and network with people who have different backgrounds and experience. For those unable to attend in person, the same ethos extends to the two types of online training available through the executive education programme. One stream, developed in response to the Covid pandemic, offers live tutoring through the Zoom platform, while the other provides access to pre-recorded digital programmes that participants complete within a set time window. Some of these self-paced courses adopt a sprint format inspired by the concepts of agile product development, enabling participants to break down a complex challenge or opportunity into a series of smaller questions that can be tackled to reach a more effective solution.

“It’s not just sitting and watching, people really have the opportunity to work with the material and apply what they are learning,” explains Hirst. “In each case we have worked hard with the faculty to figure out how to achieve the same outcomes through a different type of experience, and it’s been great to see how compelling that can be.”

Evidence that the approach is working can be found in the retention rate for the self-paced courses, with more than 90% of participants completing all the modules and assignments. The Zoom-based programmes also remain popular amid the more general post-pandemic return to in-person training, providing greater flexibility for learners in different parts of the world. “We have tried to find the sweet spot between effectiveness and accessibility, and many people who can’t come to campus have told us they find these courses valuable and impactful,” says Hirst. “We have put the choice in the hands of the learners.”

Plenty of scientists and engineers have already taken the opportunity to develop their management capabilities through the courses offered by MIT Sloan, particularly those who have been thrust into leadership positions within a rapidly growing organization. “Perhaps because we’re at MIT, we are already seeing scientists and engineers who recognize the value of engaging with ideas and tools that some people might dismiss as corporate nonsense,” says Hirst. “Generally speaking, they have really great experiences and discover new approaches that they can use in their labs and businesses to improve their own work and that of their teams and organizations.”

For those who may not yet be ready to make the leap into developing their personal management style, Hirst advocates courses that analyse the dynamics of an organization – whether it’s a start-up company, a manufacturing business or a research collaboration. The central idea here is to apply concepts from systems engineering to organizations – examining how work gets done by a human system – in order to improve overall productivity and performance.

One case study that Hirst cites from the biomedical sector is the Broad Institute, a research organization with links to MIT and Harvard that has developed a platform for generating human genomic information. “Originally they were taking months to extract the genomic data from a sample, but they have reduced that to a week by implementing some fairly simple ideas to manage their operational processes,” he says. “It’s a great example of a scientific organization that has used systems-based thinking to transform their business.”

Others may benefit from courses that focus on technology development and product strategy, or an entrepreneurship development programme that immerses participants in the process of creating a successful business from a novel idea or technology. “That programme can be transformational for many people,” Hirst observes. “Most people who come into it with a background in science and engineering are focused on demonstrating the technical superiority of their solution, but one of the big lessons is the importance of understanding the needs of the customer and the value they would derive from implementing the technology.”

For those who are keen to develop their skills in one particular area, MIT Sloan also offers a series of Executive Certificates that enable learners to choose four complementary courses focusing on topics such as strategy and innovation, or technology and operations. Once all four courses in the track have been completed – which can be achieved in just a few weeks as well as over several months or years – participants are awarded an Executive Certificate to demonstrate the commitment they have made to their own personal development.

More information can be found in a digital brochure that provides details of all the courses available through MIT Sloan, while the website for the executive education programme provides an easy way to search for relevant courses and programmes. Hirst also recommends reading the feedback and reviews from previous participants, which appear alongside each course description on the website. “Prospective learners find it really useful to see how people in similar situations, or with similar needs, have described their experience.”

The post Management insights catalyse scientific success appeared first on Physics World.

]]>
Analysis Effective management training can equip scientists and engineers with powerful tools to boost the impact of their work, identify opportunities for innovation, and build high-performing teams https://physicsworld.com/wp-content/uploads/2024/04/Business-woman-sitting-front-laptop-1382663132-shutterstock-SFIO-CRACHO-web.jpg newsletter
Sunflowers ‘dance’ together to share sunlight https://physicsworld.com/a/sunflowers-dance-together-to-share-sunlight/ Tue, 27 Aug 2024 14:34:41 +0000 https://physicsworld.com/?p=116449 Zigzag patterns created by circular motion of growing stems

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Yasmine Meroz

Sunflowers in a field can co-ordinate the circular motions of their growing stems to minimize the amount of shade each plant experiences, a study done in the US and Israel has revealed. By doing a combination of experiments and simulations, a team led by Yasmine Meroz at Tel Aviv University discovered that seemingly random movements within groups of plants can lead to self-organizing patterns that optimize growing conditions.

Unlike animal motion, plant motion is usually related to growth – an irreversible process that defines a plant’s morphology. One movement frequently observed in plants is called circumnutation, which describes repeating, circular motions at the tips of growing plant stems.

“Charles Darwin and his son, Francis, already identified circumnutations in their book, The Power of Movement in Plants, in 1880,” Meroz explains. “While they documented these movements in a number of species, it was not clear whether these have a function. It is only in recent years that some research has started to identify possible roles of circumnutations, such as the ability of roots to circumvent obstacles.”

Understanding self-organization

Circumnutation was not the initial focus of the team’s study. Instead, they sought a deeper understanding of self-organization. This is a process whereby a system that starts out in a disorderly state can gain order through local interactions between its individual components.

In nature, self-organization has been widely studied in groups of animals, including fish, birds, and insects. The coordinated movements of many individuals help animals source food, evade predators, and conserve energy.

But in 2017 a fascinating example of self-organization in plants was discovered by a team of researchers in Argentina. While observing a field of sunflowers growing in dense rows, the team found that the plants’ stems self-organized into zigzag patterns as they grew. This arrangement minimized the shade the sunflowers cast on one another, ensuring each plant received the maximum possible amount of sunlight.

Meroz’s team has now studied this phenomenon in a controlled laboratory environment. “Unlike previous work, we tracked the movement of sunflower crowns during the whole experiment,” Meroz says. “This is when we found that sunflowers move a lot via circumnutations, and we asked ourselves whether these movements might play a role in the self-organization process.”

To inform the analysis, Meroz’s team considered two key ingredients of self-organization. The first involved local interactions between individual plants – in this case, their ability to adapt their growth to avoid shading each other.

The second ingredient was the random, noisy motion that allows self-organized systems to explore a variety of possible states. This randomness enables plants to adapt to short-term environmental changes while maintaining stability in their growth patterns.

Tweaking noise

For their sunflowers, the researchers predicted that these random motions could be provided by the circumnutations first described by Charles and Francis Darwin. To investigate this idea, they ran simulations of groups of sunflowers based closely on the movements they had observed in the lab. In these simulations, they tweaked the amount of noise generated by circumnutation with a level of control that is not yet possible in real-world experiments.
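In spirit, the two ingredients – deterministic shade avoidance plus random circumnutation noise – can be captured in a toy model. The Python sketch below is purely illustrative and is not the authors’ simulation: crowns that shade each other repel sideways, a weak elastic term pulls each stem back upright, and Gaussian noise stands in for circumnutation.

```python
# Toy model of the two ingredients described above (illustrative only,
# not the simulation used in the study). Each crown can lean sideways
# by y[i]; close neighbours "shade" each other and repel, stem
# elasticity pulls crowns back upright, and Gaussian noise stands in
# for circumnutation.
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_steps, dt = 12, 5000, 0.01
shade_range = 1.0                      # neighbours closer than this shade each other
k_shade, k_stem, noise_std = 2.0, 0.2, 0.3

y = np.zeros(n_plants)                 # sideways crown displacements, initially upright

for _ in range(n_steps):
    force = -k_stem * y                # elastic restoring force towards upright
    for i in range(n_plants - 1):
        gap = y[i + 1] - y[i]
        if abs(gap) < shade_range:     # mutual shading: push the two crowns apart
            direction = np.sign(gap) if gap != 0 else 1.0
            push = k_shade * (shade_range - abs(gap)) * direction
            force[i] -= push
            force[i + 1] += push
    y += dt * force + noise_std * np.sqrt(dt) * rng.normal(size=n_plants)

print(np.round(y, 2))                  # signs tend to alternate: a zigzag row
```

Run as-is, the noise lets the row explore different configurations and settle towards alternating offsets – the essence of the zigzag pattern described above.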

“By comparing what we saw in the group experiments with our simulation data, we figured out the best balance of these factors,” explains Meroz’s colleague, Orit Peleg at the University of Colorado Boulder. “We also confirmed that real plants balance these factors in a way that leads to near-optimal minimization of shading.”

As expected, the results confirmed that the random movements of individual sunflowers play a vital role in minimizing the amount of shading experienced by each plant.

Peleg believes that their discovery has fascinating implications for our understanding of how plants behave. “It’s a bit surprising because we don’t usually think of random movement as having a purpose,” she says. “Yet, it’s vital for minimizing shading. This finding prompts us to view plants as active matter, with unique constraints imposed by their anchoring and growth-movement coupling.”

The research is described in Physical Review X.

The post Sunflowers ‘dance’ together to share sunlight appeared first on Physics World.

]]>
Research update Zigzag patterns created by circular motion of growing stems https://physicsworld.com/wp-content/uploads/2024/08/27-8-24-sunflower-1143037052-Shutterstock_Mykhailo-Baidala.jpg newsletter1
The most precise timekeeping device ever built https://physicsworld.com/a/the-most-precise-timekeeping-device-ever-built/ Tue, 27 Aug 2024 12:00:18 +0000 https://physicsworld.com/?p=116432 Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
If you want to make a clock, all you need is an oscillation – preferably one that is stable in frequency and precisely determined. Many systems will fit the bill, from the Earth’s rotation to pendulums and crystal oscillators. But if you want the world’s most precise clock, you’ll need to go to the US state of Colorado, where researchers from JILA and the University of Colorado, Boulder have measured the frequency of an optical lattice clock (OLC) with a record-low systematic uncertainty of 8.1 × 10⁻¹⁹ – equivalent to a fraction of a second over the age of the universe.
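As a quick back-of-the-envelope check of that comparison (a rough sketch using round numbers, not a calculation from the paper), multiply the fractional uncertainty by the age of the universe expressed in seconds:

```python
# Rough check: what does a fractional frequency uncertainty of 8.1e-19
# amount to over the age of the universe? (Round, illustrative numbers.)
fractional_uncertainty = 8.1e-19                   # delta-nu / nu
age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600    # ~13.8 billion years in seconds

print(fractional_uncertainty * age_of_universe_s)  # ~0.35 s of accumulated error
```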

OLCs are atomic clocks that mark the passage of time using an electron that oscillates between two energy levels (the ground state ¹S₀ and the clock state ³P₀) in an atom such as strontium. The high frequency and narrow linewidth of this atomic transition make these clocks orders of magnitude more precise than the atomic clocks used to redefine the second in 1967, which were based on a microwave transition in caesium atoms.
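To see why an optical transition helps, compare the two frequencies (standard reference values, not figures quoted in this article): caesium’s microwave transition sits at about 9.19 GHz, while strontium’s clock transition lies near 429 THz, corresponding to a wavelength of 698 nm. Since a clock’s quality factor is Q = ν/Δν, moving to the optical domain raises the operating frequency ν – and hence the attainable Q for a given linewidth Δν – by a factor of roughly

ν(Sr)/ν(Cs) ≈ (4.29 × 10¹⁴ Hz)/(9.19 × 10⁹ Hz) ≈ 4.7 × 10⁴.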

The high precision of OLCs gives them the potential to unlock technologies that can be used to sense quantities such as distances, the Earth’s gravitational field and even atomic properties such as the fine structure constant at extremely small scales. To achieve this precision, however, they must be isolated from external effects that can cause them to “tick” irregularly. This is why the atoms in an OLC are trapped in a lattice formed by laser beams and confined within a vacuum chamber.

An OLC that is isolated entirely from its environment would oscillate at the constant, natural frequency of the atomic transition, with an uncertainty of 0 Hz/Hz. In other words, its frequency would not change. However, in the real world, temperature, magnetic and electric fields, and even the collisional motion of the atoms in the lattice all influence the clock’s oscillations. These parameters therefore need to be very well controlled for the clock to operate at maximum precision.

Controlling blackbody radiation

According to Alexander Aeppli, a PhD student at JILA who was involved in setting the new record, the most detrimental environmental effect on their OLC is blackbody radiation (BBR). All thermal objects – light bulbs, human bodies, the vacuum chamber the atoms are trapped in – emit such radiation, and the electric field of this radiation couples to the atom’s energy levels. This causes a systematic shift that translates to an uncertainty in the clock’s frequency.

To minimize the effects of BBR, Aeppli and colleagues enclosed their entire system, including the vacuum chamber and optics for creating the clock, within a temperature-controlled box equipped with numerous temperature sensors. By running temperature-stabilized liquid around different parts of their experimental apparatus, they stabilized the air temperature and controlled the vacuum system temperature.

This didn’t completely solve the problem, though. The BBR shift is the sum of a static component that scales with the fourth power of temperature and a dynamic component that scales with higher powers. Even after limiting the lab’s temperature fluctuations to a few millikelvin per day, the team still needed to carry out a systematic evaluation of the shift due to the dynamic component.
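Schematically, the shift can be written in the parametrization commonly used in the optical-clock literature (a generic form for illustration; the coefficients are atomic properties, not values quoted in this article):

Δν(BBR) = ν(stat) × (T/300 K)⁴ + ν(dyn) × (T/300 K)⁶ + …

where the static term dominates and the higher-power dynamic terms are what required the team’s systematic evaluation.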

For this, the JILA-Boulder researchers turned to a 2013 study in which physicists in the US and Russia found a correlation between the uncertainty of the BBR shift and the lifetime of an electron occupying a higher-energy state (³D₁) in strontium atoms. By measuring the lifetime of this ³D₁ state, the team was able to calculate an uncertainty of 7.3 × 10⁻¹⁹ in the BBR shift.

To fully understand the atoms’ response to BBR, Aeppli explains that they also needed to measure the strength of transitions from the clock states. “The dominant transition that is perturbed by BBR radiation is at a relatively long wavelength,” he says. “This wavelength is longer than the spacing between the atoms, meaning that atoms can behave collectively, modifying the physics of this interaction. It took us quite some time to characterize this effect and involved almost a year of measurements to reduce its uncertainty.”

Photo of the vacuum chamber bathed in purple-blue light

Other environmental effects

BBR wasn’t the only environmental effect that needed systematic study. The in-vacuum mirrors used to create the lattice tend to accumulate electric charges, and the resulting stray electric fields produce a systematic DC Stark shift that changes the clock transition frequency. By shielding the mirrors with a copper structure, the researchers reduced these DC Stark shifts to below the 1 × 10⁻¹⁹ uncertainty level.

OLCs are also sensitive to magnetic fields. This is due to the Zeeman effect, which shifts the energy levels of an atom by different amounts in the presence of such fields. The researchers chose the least magnetically sensitive sub-states to operate their clock, but that still leaves a weaker second-order Zeeman shift for them to calibrate. In the latest work, they reached an uncertainty in this second-order Zeeman shift of 0.1 × 10⁻¹⁸, which is a factor of two smaller than previous measurements.

Even the lattice beams themselves cause an unwanted shift in the atoms’ transition frequency. This is known as the light or AC Stark shift, and it is due to the power of the laser beam. The researchers minimized this shift by ramping down the beam power just before starting the clock, but even at such low trapping powers, atoms in the different lattice sites can still interact, and atoms at the same site can collide. These events lead to a tunnelling and a density shift, respectively. While both are rather weak, the team nevertheless investigated their effect on the clock’s uncertainty and constrained them to below the 10⁻¹⁹ level.

How low can you go?

In early 2013, JILA scientists reported a then-record-low systematic uncertainty in their strontium OLC of 6.4 × 10⁻¹⁸. A year later, they managed to reduce this uncertainty by a factor of three, to 2.1 × 10⁻¹⁸. Ten years on, however, progress seems to have slowed: the latest uncertainty record improves on this value by a mere factor of about 2.5. Is there an intrinsic lower bound?

“The largest source of systematic uncertainty continues to be the BBR shift since it goes as temperature to the fourth power,” Aeppli says. “Even a small reduction in temperature can significantly reduce the shift uncertainty.”

To go below the 1 × 10⁻¹⁹ level, he explains that it would be advantageous to cool the system to cryogenic temperatures. Indeed, many OLC research groups are using this approach for their next-generation systems. Ultimately, though, while progress on optical clocks might not be quite as fast as it was 20 years ago, Aeppli says there is no obvious “floor”, no fundamental limit to the systematic uncertainty of optical lattice clocks. “There are plenty of clever people working on pushing uncertainty as low as possible,” he says.

The JILA-Boulder team reports its work in Physical Review Letters.

The post The most precise timekeeping device ever built appeared first on Physics World.

]]>
Analysis Colorado-based researchers have reduced the systematic uncertainty in their optical lattice clock to a record low. Ali Lezeik explains how they did it https://physicsworld.com/wp-content/uploads/2024/08/27-08-2024-Precise-optical-clock_Main-image.jpg newsletter
Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist https://physicsworld.com/a/abdus-salam-honouring-the-first-muslim-nobel-prize-winning-scientist/ Tue, 27 Aug 2024 10:00:13 +0000 https://physicsworld.com/?p=116317 Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>

A child prodigy born in a humble village in British India on 29 January 1926, Abdus Salam became one of the world’s greatest theorists who tackled some of the most fundamental questions in physics. He shared the 1979 Nobel Prize for Physics with Sheldon Glashow and Steven Weinberg for unifying the weak and electromagnetic interactions. In doing so, Salam became the first Muslim scholar to win a science-related Nobel prize – and is so far the only Pakistani to achieve that feat.

After moving to the UK in 1946 just before the partition of India, Salam gained a double-first in mathematics and physics from the University of Cambridge and later did a PhD there in quantum electrodynamics. Following a couple of years back home in Pakistan, Salam returned to Cambridge, before spending the bulk of his career at Imperial College, London. He died aged 70 on 21 November 1996, his later life cruelly ravaged by a neurodegenerative disease.

Yet to many people, Salam’s life and contributions to science are not so well known despite his founding of the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, exactly 60 years ago. Upon joining Imperial, he also became the first academic from Asia to hold a full professorship at a UK university. Keen to put Salam in the spotlight ahead of the centenary of Salam’s birth are Claudia de Rham, a theoretical physicist at Imperial, and quantum-optics researcher Ian Walmsley, who is currently provost of the college.

De Rham and Walmsley recently appeared on the Physics World Weekly podcast. An edited version of our conversation appears below.

How would you summarize Abdus Salam’s contributions to science?

CdR: Salam was one of the founders of modern physics. He pioneered the study of symmetries and unification, which helped contribute to the formulation of the Standard Model of particle physics. In 1967 he incorporated the Higgs mechanism – co-discovered by his Imperial colleague Tom Kibble – into electroweak theory, which unifies the electromagnetic and weak forces. It changed the way we see the world by underlining the importance of symmetry and by showing how some forces – which may appear different – are actually linked.

This breakthrough led him to win the 1979 Nobel Prize for Physics with Steven Weinberg and Sheldon Glashow, making him the first – in fact, so far, the only – Nobel laureate from Pakistan. Salam was also the first person from the Islamic world to win a Nobel prize in science and the most recent person from Imperial College to do so, which makes us very proud of him.

How did his connection to Imperial College come about?

CdR: After studying at Cambridge, he went back to Pakistan but realized that the scientific opportunities there were limited. So he returned to Cambridge for a while, before being appointed a professor of applied mathematics at Imperial in 1957. That made him the first Asian academic to hold a professorship at any UK university. He then moved to the physics department at Imperial and stayed at the college for almost 40 years – for the rest of his life.

Large photo of Abdus Salam at the entrance to the main library at Imperial College

For Salam, Imperial was his scientific home. He founded the theoretical physics group here, doing the work on quantum electrodynamics and quantum field theory that led to his Nobel prize. But he also did foundational work on renormalization, grand unification, supersymmetry and so on, making Imperial one of the world’s leading centres for fundamental physics research. Many of his students, like Michael Duff and Ray Rivers, also had an incredible impact in physics, paving the way for how we do quantum field theory today.

What was Salam like as a person?

IW: I had the privilege of meeting Salam when I was an undergraduate here in Imperial’s physics department in 1977. In the initial gathering of new students, he gave a short talk on his work and that of the theoretical physics group and the wider department. I didn’t understand much of what he said, but Salam’s presence was really important for motivating young people to think about – and take on – the hard problems and to get a sense of the kind of problems he was tackling. His enthusiasm was really fantastic for a young student like myself.

When he won the Nobel prize in 1979, I was by then a second-year student and there were a lot of big celebrations and parties in the department. There were a number of other luminaries at Imperial like Kibble, who’d made lots of important contributions. In fact, I think Salam’s group was probably the leading theoretical particle group in the UK and among the best in the world. He set it up and it was fantastic for the department to have someone of his calibre: it was a real boost.

How would you describe Salam’s approach to science?

CdR: Salam thought about science on many different levels. There wasn’t just the unification within science itself, but he saw science as a unifying force. As he showed when he set up the theoretical physics group at Imperial and, later, the ICTP in Trieste, he saw science as something that could bring people from all over the world together.

We’re used to that kind of approach today. But at the time, driving collaboration across the world was revolutionary. Salam wasn’t just an incredible scientist, but an incredible human being. He was eager to champion diversity – recognizing that it’s the best thing not just for science but for humanity too. Salam was ahead of his time in realizing the unifying power of science and being able to foster it throughout the world.

What impact has the ICTP had over the last 60 years?

CdR: The goal of the ICTP has been to combat the isolation and lack of resources that people in some parts of the world, especially the global south, were facing. It’s had a huge impact over the last 60 years and has now grown into a network of five institutions spread over four continents, all of which are devoted to advancing international collaboration and scientific expertise to the non-western world. It hosts around 6000 scientists every year, about 50% of whom are from the global south.

How well known do you think Salam is around the world?

IW: Is he well known in the physics community globally? Absolutely. I also think he is well regarded and known across the Muslim community. But is he well known to the general public as one of the UK’s greatest adopted scientists? Probably not. And I think that’s a shame because his skills as a pedagogue and his concern for people as a whole – and for science as a motivating force – are really important messages and things he really championed.

What activities has Imperial got planned for the centenary of Salam’s birth?

CdR: We want to use the centenary not only to promote and celebrate excellence in fundamental science but also to engage with people from the global south. In fact, we already had a 98th birthday celebration on campus earlier this year, where we renamed the Imperial Central Library, which is now called the Abdus Salam Library. Then there were public talks by various physicists, including the ICTP director Atish Dabholkar and Tasneem Husain, who is Pakistan’s first female string theorist.

5 people stood in front of the Abdus Salam Library

We also held an exhibition here on campus about many aspects of Salam’s life for school children all around London to come and visit. It has now moved to a permanent virtual home online. And we held an essay contest for school children from Pakistan to see how Salam has inspired them, selecting a few essays to publish online. We also filmed a special documentary about Salam called “A unifying force”.

What impact do you think those events have had?

IW: It was really great to name a building after him, especially as it’s the library where students congregate all the time. There’s a giant display on the wall outside that describes him and has a great picture of Salam. You can see it even without entering the library, which is great because you often have families taking their children and showing them the picture and reading the narrative. It’ll spread his fame a bit more, which is really important and really lovely.

CdR: One thing that was clear in the build-up to the event in January was just how much his life story resonates with people at absolutely every level. No matter your background or whether you’re a scientist or not, I think Salam’s life awakens the scientist in all of us – he connects with people. But as the centenary of his birth draws closer, we want to build on those initiatives. Fundamental, curiosity-driven research is a way to make connections with the global south so we’re very much looking forward to an even bigger celebration for his 100th birthday in 2026.

  • A full version of this interview can be heard on the 8 August 2024 episode of the Physics World Weekly podcast.

Abdus Salam: driven to success

Abdus Salam

Abdus Salam, like all geniuses, was not a straightforward character. That much is made clear in the 2018 documentary movie Salam: the First ****** Nobel Laureate directed by Anand Kamalakar and produced by Zakir Thaver and Omar Vandal. Containing interviews with Salam’s friends, family members and former colleagues, Salam is variously described as being “charismatic”, “humane”, “difficult”, “impatient”, “sensitive”, “gorgeous”, “bright”, “dismissive” and “charming”.

Despite him being the first Nobel-prize winner from Pakistan, the film also wonders why he is relatively poorly known and unrecognized in his homeland. The movie argues that this was down to his religious beliefs. Most Pakistanis are Sunnis but Salam was an Ahmadi, a member of a minority Islamic movement. Opposition in Pakistan to the Ahmadis even led to its parliament declaring them non-Muslims in 1974, forbidding them from professing their creed in public or even worshipping in their own mosques.

Those edicts, which led to Salam’s religious beliefs being re-awakened, also saw him effectively being ignored by Pakistan (hence the title of the movie). However, Salam was throughout his life keen to support scientists from less wealthy nations, such as his own, which is why he founded the International Centre for Theoretical Physics (ICTP) in Trieste in 1964.

Celebrating its 60th anniversary this year, the ICTP now has 45 permanent research staff and brings together more than 6000 leading and early-career scientists from over 150 nations to attend workshops, conferences and scientific meetings. It also has international outposts in Brazil, China, Mexico and Rwanda, as well as eight “affiliated centres” – institutes or university departments with which the ICTP has formal collaborations.

Matin Durrani

The post Abdus Salam: honouring the first Muslim Nobel-prize-winning scientist appeared first on Physics World.

]]>
Interview Claudia de Rham and Ian Walmsley pay tribute to the contributions of the great theorist Abdus Salam https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Durrani-Abdus-Salam-featured.jpg
3D printing creates strong, stretchy hydrogels that stick to tissue https://physicsworld.com/a/3d-printing-creates-strong-stretchy-hydrogels-that-stick-to-tissue/ Mon, 26 Aug 2024 10:00:41 +0000 https://physicsworld.com/?p=116440 A new 3D printing method fabricates entangled hydrogels for medical applications

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
A new method for 3D printing, described in Science, makes inroads into hydrogel-based adhesives for use in medicine.

3D printers, which deposit individual layers of a variety of materials, enable researchers to create complex shapes and structures. Medical applications often require strong and stretchable biomaterials that also stick to moving tissues, such as the beating human heart or tough cartilage covering the surfaces of bones at a joint.

Many researchers are pursuing 3D printed tissues, organs and implants created using biomaterials called hydrogels, which are made from networks of crosslinked polymer chains. While significant progress has been made in the field of fabricated hydrogels, traditional 3D printed hydrogels may break when stretched or crack under pressure. Others are too stiff to sculpt around deformable tissues.

Researchers at the University of Colorado Boulder, in collaboration with the University of Pennsylvania and the National Institutes of Standards and Technology (NIST), realized that they could incorporate intertwined chains of molecules to make 3D printed hydrogels stronger and more elastic – and possibly even allow them to stick to wet tissue. The method, known as CLEAR, sets an object’s shape using spatial light illumination (photopolymerization) while a complementary redox reaction (dark polymerization) gradually yields a high concentration of entangled polymer chains.

To their knowledge, the researchers say, this is the first time that light and dark polymerization have been combined simultaneously to enhance the properties of biomaterials fabricated using digital light processing methods. No special equipment is needed – CLEAR relies on conventional fabrication methods, with some tweaks in processing.

“This was developed by a graduate student in my group, Abhishek Dhand, and research associate Matt Davidson, who were looking at the literature on entangled polymer networks. In most of these cases, the entangled networks that form hydrogels with high levels of certain material properties…are made with very slow reactions,” explains Jason Burdick from CU-Boulder’s BioFrontiers Institute. “This is not compatible with [digital light processing], where each layer is reacted through short periods of light. The combination of the traditional [digital light processing] with light and the slow redox dark polymerization overcomes this.”

Experiments confirmed that hydrogels produced with CLEAR were fourfold to sevenfold tougher than hydrogels produced with conventional digital light processing methods for 3D printing. The CLEAR-fabricated hydrogels also conformed and stuck to animal tissues and organs.

“We illustrated in the paper the application of hydrogels printed with CLEAR as tissue adhesives, as others had previously defined material toughness as an important material property in adhesives. Through CLEAR, we can then process these adhesives into any structures, such as porous lattices or introduce spatial adhesion that may be of interest for biomedical applications,” Burdick says. “What is also interesting is that CLEAR can be used with other types of materials, such as elastomers, and we believe that it can be used across broad manufacturing methods.”

CLEAR could also have environmentally friendly implications for manufacturing and research, the researchers suggest, by eliminating the need for additional light or heat energy to harden parts. The researchers have filed for a provisional patent and will be conducting additional studies to better understand how tissues react to the printed hydrogels.

“Our work so far was mainly proof-of-concept of the method and showing a range of applications,” says Burdick. “The next step is to identify those applications where CLEAR can make an impact and then further explore those topics, whether this is specific to biomedicine or more broadly beyond this.”

The post 3D printing creates strong, stretchy hydrogels that stick to tissue appeared first on Physics World.

]]>
Research update A new 3D printing method fabricates entangled hydrogels for medical applications https://physicsworld.com/wp-content/uploads/2024/08/26-08-24-3D-Printer-Matt-Davidson.jpg newsletter1
Drowsiness-detecting earbuds could help drivers stay safe at the wheel https://physicsworld.com/a/drowsiness-detecting-earbuds-could-help-drivers-stay-safe-at-the-wheel/ Thu, 22 Aug 2024 15:00:41 +0000 https://physicsworld.com/?p=116407 In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Drowsiness plays a major role in traffic crashes, injuries and deaths, and is considered the most critical hazard in construction and mining. A wearable device that can monitor fatigue could help protect drivers, pilots and machine operators from the life-threatening dangers of fatigue.

With this aim, researchers at UC Berkeley are developing techniques to detect signs of drowsiness in the brain, using a pair of prototype earbuds to perform electroencephalography (EEG) and other physiological measurements. Describing the device in Nature Communications, the team reports successful tests on volunteers.

“Wireless earbuds are something we already wear all the time,” says senior author Rikky Muller in a press statement. “That’s what makes ear EEG such a compelling approach to wearables. It doesn’t require anything extra. I was inspired when I bought my first pair of Apple’s AirPods in 2017. I immediately thought, ‘What an amazing platform for neural recording’.”

Improved design

EEG uses multiple electrodes placed on the scalp to non-invasively monitor the brain’s electrical activity – such as the alpha waves that increase when a person is relaxed or sleepy. Researchers have also demonstrated that multi-channel EEG signals can be recorded from inside the ear canal, using in-ear sensors and electrodes.

Existing in-ear devices, however, mostly use wet electrodes (which necessitate skin-preparation and hydrogel on the electrodes), contain bulky electronics and require customized earpieces for each user. Instead, Muller and colleagues aimed to create an in-ear EEG with long-lifespan dry electrodes, wireless electronics and a generic earpiece design.

In-ear EEG device

The researchers developed a fabrication process based on 3D printing of a polymer earpiece body and electrodes. They then plated the electrodes with copper, nickel and gold, creating electrodes that remain stable over months of use. To ensure comfort for all users, they designed small, medium and large earpieces (with slightly different electrode sizes to maximize electrode surface area).

The final medium-sized earpiece contains four 60 mm² in-ear electrodes, which apply outward pressure to lower the electrode–skin impedance and improve mechanical stability, plus two 3 cm² out-ear electrodes. Signals from the earpiece are read out and transmitted to a base station by a low-power wireless neural recording platform (the WANDmini) affixed to a headband.

Drowsiness study

To assess the earbuds’ performance, the team recorded 35 h of electrophysiological data from nine volunteers. Subjects wore two earpieces and did not prepare their skin beforehand or apply hydrogel to the electrodes. As well as EEG, the device measured signals such as heartbeats (using electrocardiography) and eye movements (via electrooculography), collectively known as ExG.

To induce drowsiness, subjects played a repetitive reaction time game for 40–50 min. During this task, they rated their drowsiness every 5 min on the Karolinska Sleepiness Scale (KSS). The measured ExG data, reaction times and KSS ratings were used to generate labels for classifier models. Data were labelled as “drowsy” if the user reported a KSS score of 5 or higher and their reaction time had more than doubled since the first 5 min.

To create the alert/drowsy classifier, the researchers extracted relevant temporal and spectral features in standard EEG frequency bands (delta, theta, alpha, beta and gamma). They used these data to train three low-complexity machine learning models: logistic regression, support vector machines (SVM) and random forest. They note that spectral features associated with eye movement, relaxation and drowsiness were the most important for model training.
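As a concrete illustration of that pipeline, the sketch below shows how band-power features might feed one of the three models. It is a minimal reconstruction using SciPy and scikit-learn – the sampling rate, epoch layout and function names are assumptions for illustration, not the authors’ actual code.

```python
# Minimal sketch of the alert/drowsy classification pipeline described
# above (not the authors' exact code): band powers in the standard EEG
# bands serve as features for an SVM classifier.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch):
    """Mean spectral power in each standard EEG band for one epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=2 * FS)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def train_classifier(epochs, labels):
    """epochs: (n_epochs, n_samples) array; labels: 0 = alert, 1 = drowsy,
    assigned from KSS scores and reaction times as described in the text."""
    features = np.array([band_powers(e) for e in epochs])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return model.fit(features, labels)
```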

All three classifier models achieved high accuracy, with comparable performance to state-of-the-art wet electrode systems. The best-performing model (utilizing an SVM classifier) achieved an average accuracy of 93.2% when evaluating users it had seen before and 93.3% with never-before-seen users. The logistic regression model, meanwhile, is more computationally efficient and requires significantly less memory.

The researchers conclude that the results show promise for developing next-generation wearables that can monitor brain activity in work environments and everyday scenarios. Next, they will integrate the classifiers on-chip to enable real-time brain-state classification. They also intend to miniaturize the hardware to eliminate the need for the WANDmini.

“We plan to incorporate all of the electronics into the earbud itself,” Muller tells Physics World. “We are working on earpiece integration, and new applications, including the use of earbuds during sleep.”

The post Drowsiness-detecting earbuds could help drivers stay safe at the wheel appeared first on Physics World.

]]>
Research update In-ear electroencephalography could protect drivers, pilots and machine operators from the dangers of fatigue https://physicsworld.com/wp-content/uploads/2024/08/22-08-24-ear-EEG-fig1.jpg
Physics for a better future: mammoth book looks at science and society https://physicsworld.com/a/physics-for-a-better-future-mammoth-book-looks-at-science-and-society/ Thu, 22 Aug 2024 12:24:13 +0000 https://physicsworld.com/?p=116412 Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
This episode of the Physics World Weekly podcast explores how physics can be used as a force for good – helping society address important challenges such as climate change, sustainable development, and improving health.

Our guest is the Swiss physicist Christophe Rossel, who is a former president of the European Physical Society (EPS) and an emeritus scientist at IBM Research in Zurich.

Rossel is a co-editor and co-author of the book EPS Grand Challenges, which looks at how science and physics can help drive positive change in society and raise standards of living worldwide as we approach the middle of the century. The huge tome weighs in at 829 pages, was written by 115 physicists and honed by 13 co-editors.

Rossel talks to Physics World’s Matin Durrani about the intersection of science and society and what physicists can do to make the world a better place.

The post Physics for a better future: mammoth book looks at science and society appeared first on Physics World.

]]>
Podcasts Our podcast guest is Christophe Rossel, co-author of EPS Grand Challenges https://physicsworld.com/wp-content/uploads/2024/08/21-8-24-Christophe-Rossel-list.jpg newsletter
Quantum sensor detects magnetic and electric fields from a single atom https://physicsworld.com/a/quantum-sensor-detects-magnetic-and-electric-fields-from-a-single-atom/ Thu, 22 Aug 2024 09:30:12 +0000 https://physicsworld.com/?p=116396 New device is like an MRI machine for quantum materials, say physicists

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Researchers in Germany and Korea have fabricated a quantum sensor that can detect the electric and magnetic fields created by individual atoms – something that scientists have long dreamed of doing. The device consists of an organic semiconducting molecule attached to the metallic tip of a scanning tunnelling microscope, and its developers say that it could have applications in biology as well as physics. Some possibilities include sensing the presence of spin-labelled biomolecules and detecting the magnetic states of complex molecules on a surface.

Today’s most sensitive magnetic field detectors exploit quantum effects to map the presence of extremely weak fields. Among the most promising of these new-generation quantum sensors are nitrogen-vacancy (NV) centres in diamond. These structures can be fabricated inside a nanopillar on the tip of an atomic force microscope (AFM), and their spatial resolution is an impressively small 10–100 nm. However, this is still a factor of 10 to 100 larger than the diameter of an atom.

A spatial resolution of 0.1 nm

The new sensor developed by Andreas Heinrich and colleagues at the Forschungszentrum Jülich and Korea’s IBS Center for Quantum Nanoscience (QNS) can also be placed on a microscope tip – in this case, that of a scanning tunnelling microscope (STM). The difference is that the spatial resolution of this atomic-scale device is just 0.1 nm, making it 100 to 1000 times more sensitive than devices based on NV centres.

The team made the sensor by attaching a molecule with an unpaired electron – a molecular spin – to the apex of an STM’s metallic tip. “Typically, the lifetime of a spin in direct contact with a metal is very short and cannot be controlled,” explains team member Taner Esat, who was previously at QNS and is now at Jülich. “In our approach, we brought a planar molecule known as 3,4,9,10-perylenetetracarboxylic-dianhydride (or PTCDA for short) into a special configuration on the tip using precise atomic-scale manipulation, thus decoupling the molecular spin.”

Determining the magnetic field of a single atom

In this configuration, Esat explains, the molecule is a spin-½ system, and in the presence of a magnetic field it behaves like a two-level quantum system. This behaviour is due to the Zeeman effect, which splits the molecule’s ground state into spin-up and spin-down states with an energy difference that depends on the strength of the magnetic field. Using electron spin resonance in the STM, the researchers were able to detect this energy difference with a resolution of around 100 neV. “This allowed us to determine the magnetic field of a single atom (which finds itself only a few atomic distances away from the sensor) that caused the change in spin states,” Esat tells Physics World.
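To get a feel for the numbers (an illustrative estimate assuming a free-electron g-factor of g ≈ 2, which is not a value quoted by the team), the Zeeman splitting ΔE = gμ_B B links that energy resolution to a magnetic-field resolution of

B = ΔE/(gμ_B) ≈ (100 × 10⁻⁹ eV)/(2 × 5.79 × 10⁻⁵ eV T⁻¹) ≈ 0.9 mT.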

The team demonstrated the feasibility of its technique by measuring the magnetic and electric dipole fields from a single iron atom and a silver dimer on a gold substrate with better than 0.1 nm resolution.

The next step, says Esat, is to increase the new device’s magnetic field sensitivity by implementing more advanced sensing protocols based on pulsed electron spin resonance schemes and by finding molecules with longer spin decoherence times. “We hope to increase the sensitivity by a factor of about 1000, which would allow us to detect nuclear spins at the atomic scale,” he says.

A holy grail for quantum sensing

The new atomic-scale quantum magnetic field sensor should also make it possible to resolve spins in certain emerging two-dimensional quantum materials. These materials are predicted to host many complex magnetic orders, but these cannot be measured with existing instruments, Heinrich and his QNS colleague Yujeong Bae note. Another possibility would be to use the sensor to study so-called encapsulated spin systems such as endohedral fullerenes, which comprise a magnetic core surrounded by an inert carbon cage.

“The holy grail of quantum sensing is to detect individual nuclear spins in complex molecules on surfaces,” Heinrich concludes. “Being able to do so would make for a magnetic resonance imaging (MRI) technique with atomic-scale spatial resolution.”

The researchers detail their sensor in Nature Nanotechnology. They have also prepared a video to illustrate the working principle of the device and how they fabricated it.

The post Quantum sensor detects magnetic and electric fields from a single atom appeared first on Physics World.

]]>
Research update New device is like an MRI machine for quantum materials, say physicists https://physicsworld.com/wp-content/uploads/2024/08/Low-Res_2024_06_19_Esat_005.jpg
Software expertise powers up quantum computing https://physicsworld.com/a/software-expertise-powers-up-quantum-computing/ Wed, 21 Aug 2024 14:37:58 +0000 https://physicsworld.com/?p=116387 Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Making a success of any new venture can be a major challenge, but it always helps to have powerful partnerships. In the case of the Quantum Software Lab (QSL), established in April 2023 as part of the University of Edinburgh’s School of Informatics, its position within one of the world’s leading research centres for computer science offers direct access to expertise spanning everything from artificial intelligence through to high-performance computing. But the QSL also has a strategic alliance with the UK’s National Quantum Computing Centre (NQCC), providing a gateway to emerging hardware platforms and opening up new opportunities to work with end users on industry-relevant problems.

Bringing those worlds together is Elham Kashefi, who is both the director of the QSL and Chief Scientist of the NQCC. In her dual role, Kashefi is able to connect and engage with the global research community, while also exploiting her insights and ideas to shape the technology programme at the national lab. “Elham Kashefi is the most vibrant and exuberant character, and she has all the right attitudes to bring diverse people together to tackle the big challenges we are facing in quantum computing,” says Sir Peter Knight, the architect behind the UK’s National Quantum Technologies Programme. “Elham has the ability to apply insights from her background in computer science in a way that helps physicists like me to make the hardware work more effectively.”

The QSL’s connection to the NQCC imbues its activities with a strong focus on innovation, centring its development programme around the objective of demonstrating quantum utility – in other words, delivering reliable and accurate quantum solutions that offer a genuine improvement over classical computing. “Our partnership with the QSL is all about driving user adoption,” says NQCC director Michael Cuthbert. “The NQCC can provide a front door to the end-user community and raise awareness of the potential of quantum computing, while our colleagues in Edinburgh bring the academic expertise and rigour to translate the mathematics of quantum theory into use cases and applications that benefit all parts of our society and the economy.”

Since its launch, the QSL has become the largest research group for quantum software and algorithm development in the UK, with more than 50 researchers and PhD students. This core team is also supported by a number of affiliate members from across the University of Edinburgh, notably the EPCC supercomputing centre, as well as from Sorbonne University in France, where Kashefi also has a research role.

Within this extended network Kashefi and her faculty team have been working to establish a research culture that is based on collective success rather than individual endeavour. “There is so much discovery and innovation happening right now, and we set ourselves the goal of bringing disparate pieces together to establish a coherent programme,” she explains. “What has made me very happy is that we are now focusing on what we can achieve by combining our knowledge and expertise, rather than what we can do on our own.”

Within the Lab’s core programme, the Quantum Advantage Pathfinder, the primary goal is to work with end users in industry and the public sector to identify key computational roadblocks and translate them into research problems that can be addressed with quantum techniques. Once an algorithm has been devised and implemented, a crucial step of the process is to benchmark the solution to assess what sort of benefit it might offer over a conventional supercomputer.

“We are all academic researchers, but within the QSL we are nurturing a start-up culture where we want to understand and address the needs of the ecosystem,” says Kashefi. “For each project we are following the full pathway from the initial pain point identified by our industry partners through to a commercial application where we can show that quantum computing has delivered a genuine advantage.”

In just one example, application engineers from the NQCC and software developers from the QSL have been working with the high-street bank HSBC to explore the benefits of quantum computing for tackling the growing problem of financial fraud. HSBC already exploits classical machine learning to detect anomalous transactions that could indicate criminal behaviour, and the project team – which also includes hardware provider Rigetti – has been investigating whether quantum machine learning could deliver an advantage that would reduce risk and enable the bank to improve its anti-fraud services.

Quantum Software Lab

Alongside these problem-focused projects, the discovery-led nature of the academic environment also provides the QSL with the freedom to reverse the pipeline: to develop optimal approaches for a class of quantum algorithms or protocols that could be relevant for many different application areas. One project, for example, is investigating how hybrid quantum/classical algorithms could be exploited to solve big data problems using a small-scale quantum computer, while another is developing a unified benchmarking approach that could be applied across different hardware architectures.

For the NQCC, meanwhile, Cuthbert believes that the insights gained from this more universal approach will be crucial for planning future activities at the national lab. “Theoretical advances that are focused on the practical utilization of quantum computing will inform our technology programme and help us to build an effective quantum ecosystem,” he says. “It is vitally important that we understand how different elements of theory are developing, and what new techniques and discoveries are emerging in classical computing.”

Indeed, the importance of theory and informatics for accelerating the development of useful quantum computing is underlined by the QSL’s leading role in two of the new quantum hubs that were launched by the UK government at the end of July. For the one that will be focused on quantum computing, which is based at the University of Oxford, QSL researchers will take the lead on developing software tools that will help to extract more power from emerging quantum hardware, such as quantum error correction, distributed quantum computing, and hybrid quantum/classical algorithms. The QSL team will also investigate novel protocols for secure multi-party computing through its partnership with the Integrated Quantum Networks hub, which is being led by Heriot-Watt University.

Sir Peter Knight

At the same time, the QSL’s direct link to the NQCC will help to ensure that these software tools advance in tandem with the rapidly evolving capabilities of the quantum processors. “You need a marriage between the hardware and software to drive progress and work out where the roadblocks are,” comments Sir Peter. “Continuous feedback between algorithm development, the design of the quantum computing stack, and the physical constraints of the hardware creates a virtuous circle that produces better results within a shorter timeframe.”

An integral part of that accelerated co-development is the NQCC’s development of hardware platforms based on superconducting qubits, trapped ions and neutral atoms, while the national lab is also set to host seven quantum testbeds that are now being installed by commercial hardware developers. Once the testbeds are up and running in March 2025, there will be a two-year evaluation phase in which QSL researchers and the UK’s wider quantum community will be able to work with the NQCC and the hardware companies to understand the unique capabilities of each technology platform, and to investigate which qubit modalities are most suited to solving particular types of problems.

One key focus for this collaborative work will be developing and testing novel schemes for error correction, since it is becoming clear that quantum machines with even modest numbers of qubits can address complex problems if the noise levels can be reduced. Researchers at the QSL are now working to translate recent theoretical advances into software that can run on real computer architectures, with the testbeds providing a unique opportunity to investigate which error-correction codes can deliver the optimal results for each qubit modality.

Supporting these future endeavours will be a new Centre for Doctoral Training (CDT) for Quantum Informatics, led by the University of Edinburgh in collaboration with the University of Oxford, University College London, the University of Strathclyde and Heriot-Watt University.

“As part of their training, each cohort will spend two weeks at the NQCC, enabling the students to learn key technical skills as well as gaining an understanding of wider issues, such as the importance of responsible and ethical quantum computing,” says CDT director Chris Heunen, a senior member of the QSL team. “During their placement the students will also work with the NQCC’s applications engineers to solve a specific industry problem, exposing them to real-world use cases as well as the hardware resources installed at the national lab.”

With the CDT set to train around 80 PhD students over the next eight years, Kashefi believes that it will play a vital role in ensuring the long-term sustainability of the QSL’s programme and the wider quantum ecosystem. “We need to train a new generation of quantum innovators,” she says. “Our CDT will provide a unique programme for enabling young people to learn how to use a quantum computer, which will help us in our goal to deliver innovative solutions that derive real value from quantum technologies.”

The post Software expertise powers up quantum computing appeared first on Physics World.

]]>
Analysis Combining research excellence with a direct connection to the National Quantum Computing Centre, the Quantum Software Lab is focused on delivering effective solutions to real-world problems https://physicsworld.com/wp-content/uploads/2024/08/web-QSL-launch-2023.jpg newsletter
Vacuum-sealed tubes could form the backbone of a long-distance quantum network https://physicsworld.com/a/vacuum-sealed-tubes-could-form-the-backbone-of-a-long-distance-quantum-network/ Wed, 21 Aug 2024 14:00:15 +0000 https://physicsworld.com/?p=116394 Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
A network of vacuum-sealed tubes inspired by the “arms” of the LIGO gravitational wave detector could provide the foundations for a future quantum Internet. The proposed design, which its US-based developers describe as both “revolutionary” and feasible, could support communication rates as high as 10¹³ quantum bits (qubits) per second. This would exceed the rates of currently available quantum channels based on satellites or optical fibres by at least four orders of magnitude, though members of the team note that implementing the design will be challenging.

Quantum computers outperform their classical counterparts at certain problems. Realizing their full potential, however, will require connecting multiple quantum machines via a network that can transmit quantum information over long distances, just as the Internet does with classical information.

One way of creating such a network would be to use existing technologies such as fibre-optic cables or satellites. Both technologies transmit classical information using photons, and in principle they can transmit quantum information using photonic qubits, too. The problem is that they are inherently “lossy”, with photons being absorbed by the fibre or (to a lesser degree) by the Earth’s atmosphere on their way to and from the vacuum of space. This loss of information is particularly challenging for quantum networks, as qubits cannot be “copied” in the same way that classical bits can.

Inspired by LIGO

The proposal put forward by Liang Jiang and colleagues at the University of Chicago’s Pritzker School of Molecular Engineering, Stanford University and the California Institute of Technology aims to solve this problem by combining the advantages of satellite- and fibre-based communications. “In a vacuum, you can send a lot of information without attenuation,” explains team member Yesun Huang, the lead author of a Physical Review Letters paper on the proposal. “But being able to do that on the ground would be ideal.”

The new design for a long-distance quantum network involves connecting quantum channels made from vacuum-sealed tubes fitted with a series of lenses. These vacuum beam guides (VBGs), as they are known, measure around 20 cm in diameter, and Huang says they could span thousands of kilometres while supporting the transmission of 10 trillion qubits per second. “Photons carrying quantum information could travel through these tubes with the lenses placed every few kilometres in the tubes to ensure they do not spread out too much and stay focused,” he explains.
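That few-kilometre lens spacing is consistent with simple Gaussian-beam optics: a beam stays roughly collimated over one Rayleigh range, so each lens must refocus it on that length scale. A minimal back-of-the-envelope sketch (the wavelength and beam waist below are illustrative assumptions, not values taken from the paper):

```python
import math

wavelength = 1.55e-6  # m; telecom-band photons (assumed for illustration)
waist = 0.05          # m; beam waist fitting comfortably inside a ~20 cm tube (assumed)

# Rayleigh range: the distance over which a Gaussian beam remains roughly collimated
z_R = math.pi * waist**2 / wavelength
print(f"Rayleigh range: {z_R / 1e3:.1f} km")  # ~5 km, i.e. lenses every few kilometres
```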

Infographic showing a map of the US with "backbone" vacuum quantum channels connecting several major cities, supplemented with shorter fibre-based communication channels reaching smaller hubs. A smaller diagram shows the positioning of lenses along the vacuum channel between quantum nodes.

The new design is inspired by the system that the Laser Interferometer Gravitational-Wave Observatory (LIGO) experiment employs to detect gravitational waves. In LIGO, twin laser beams travel down two tubes – the “arms” of the interferometer – that are arranged in an L-shape and kept under ultrahigh vacuum. Mirrors precisely positioned at the ends of each arm reflect the laser light back down the tubes and onto a detector. When a gravitational wave passes through this set-up, it distorts the distance travelled by each laser beam by a tiny but detectable amount.

Engineering challenges, but a big payoff

While LIGO’s arms measure 4 km in length, the tubes in Jiang and colleagues’ proposal could be much narrower. They would also need only a moderate vacuum of 10⁻⁴ atmospheres of pressure, as opposed to LIGO’s 10⁻¹¹ atm. Even so, the researchers acknowledge that implementing their technology will not be simple, with several civil engineering issues still to be addressed.

For the moment, the team is focusing on small-scale experiments to characterize the VBGs’ performance. But members are thinking big. “Our hope is to realize these channels over a continental scale,” Huang tells Physics World.

The benefits of doing so would be significant, he argues. “As well as benefiting secure quantum communication (quantum key distribution protocols, for example), the new VBG channels might also be employed in other quantum applications,” he says. As examples, he cites ultra-long-baseline optical telescopes, quantum networks of clocks, quantum data centres and delegated quantum computing.

Jiang adds that with the entanglement created from VBG channels, the researchers also hope to improve the performance of coordinating decisions between remote parties using so-called quantum telepathy – a phenomenon whereby two non-communicating parties can exhibit correlated behaviours that would be impossible to achieve using classical methods.

The post Vacuum-sealed tubes could form the backbone of a long-distance quantum network appeared first on Physics World.

]]>
Research update Theoretical study proposes a "revolutionary" new method for constructing the future quantum Internet https://physicsworld.com/wp-content/uploads/2024/08/21-08-2024-Liang-Jiang.jpg newsletter1
Solar-driven atmospheric water extractor provides continuous freshwater output https://physicsworld.com/a/solar-driven-atmospheric-water-extractor-provides-continuous-freshwater-output/ Wed, 21 Aug 2024 10:15:52 +0000 https://physicsworld.com/?p=116399 Standalone device harvests water out of air without requiring maintenance, solely using sunlight

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Freshwater scarcity affects 2.2 billion people around the world, especially in arid and remote regions. New technologies are needed to provide freshwater in regions that lack water suitable for drinking and irrigation. Harvesting moisture from the air is one approach that has been trialled over the years, with varying degrees of success.

“Water scarcity is one of the major challenges faced by the globe, which is particularly important in Middle East regions. Depending on the local conditions, one needs to identify all possible water sources to get fresh water for our daily use,” explains Qiaoqiang Gan, from King Abdullah University of Science and Technology (KAUST).

Gan and his team have recently developed a solar-driven atmospheric water extraction (SAWE) device that can continuously harvest moisture from the air to supply clean water to people in humid climates.

New development in an existing area

Technologies for harvesting water from the air have been around for many years, but SAWEs have faced various obstacles – chief among them slow kinetics in the sorbent materials. In SAWEs, the sorbent material first captures moisture from the air. Once saturated, the system is sealed and exposed to sunlight to extract the water.

These slow kinetics mean that only one cycle is possible per day with most devices, so they have traditionally worked using a two-stage approach – moisture capture at night and desorption via sunlight during the day. Many systems have low outputs and require manual switching between cycles, so they cannot provide continuous water harvesting.

This could be about to change, because the system developed by Gan and colleagues can produce water continuously. “We can use the extracted water from the air for irrigation with no need for tap water. This is an attractive technology for regions with humid air but no access to fresh water,” says Gan.

Continuous water production

The SAWE developed at KAUST passively alternates between the two stages and can cycle continuously without human intervention. This was made possible by the inclusion of mass transport bridges (MTBs) that provide a connection between the water capture and water generation mechanisms.

The MTBs comprise vertical microchannels filled with a salt solution to absorb water from the atmosphere. Once saturated, the water-rich salt solution is pulled up via capillary action into an enclosed high-temperature chamber. Here, a solar absorber generates concentrated vapour, which then condenses on the chamber wall, producing freshwater. The concentrated salt solution then diffuses back down the channel to collect more water.

Under 1-sun illumination at 90% relative humidity, a prototype SAWE system with an evaporation area of 3 × 3 cm consistently produced fresh water at a rate of 0.65 L/m²/h. The researchers found that the system could also function in more arid environments with relative humidity as low as 40% and that – in regions with abundant solar irradiance and high humidity – it had a maximum water production potential of 4.6 L/m² per day.
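To put the area-normalized figures in perspective, a quick sanity check shows what they mean in absolute terms for the lab-scale prototype (a rough sketch; the seven equivalent peak-sun hours assumed below are an illustrative figure, not a measurement from the study):

```python
area = 0.03 * 0.03  # m^2; the 3 x 3 cm prototype evaporation area
rate = 0.65         # L/m^2/h under 1-sun illumination at 90% relative humidity

# Absolute hourly output of the prototype
print(f"{rate * area * 1000:.2f} mL/h")  # ~0.6 mL/h

# With ~7 equivalent peak-sun hours per day (assumed), the daily figure
# approaches the reported ceiling of 4.6 L/m^2 per day
print(f"{rate * 7:.2f} L/m^2 per day")  # 4.55, close to the reported maximum
```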

Scaling up in Saudi Arabia

Following the initial tests, the researchers built a scaled-up system (with an evaporation area of 13.5 × 24 cm) in Thuwal, Saudi Arabia, that was just as affordable and simple to produce as the small-scale prototype. They tested the system over 35 days across two seasons.

“Saudi Arabia launched an aggressive initiative known as Saudi Green Initiative, aiming to plant 10 billion trees in the country. The key challenge is to get fresh water for irrigation,” Gan explains. “Our technology provided a potential solution to address the water needs in suitable regions like the core area near the Red Sea and Arabic Bay, where they have humid air but no sufficient fresh water.”

The tests in Saudi Arabia showed that the scaled-up system could produce 2–3 L/m² of freshwater per day during summer and 1–2.8 L/m² per day during the autumn. The water harvested was also used for off-grid irrigation of Chinese cabbage plants in the local harvesting area, showing its potential for use in remote areas that lack access to large-scale water sources.

Looking ahead, Gan tells Physics World that “we are developing prototypes for the atmospheric water extraction module to irrigate plants and trees, as the water productivity can meet the water needs of many plants in their seeding stage”.

The research is described in Nature Communications.

The post Solar-driven atmospheric water extractor provides continuous freshwater output appeared first on Physics World.

]]>
Research update Standalone device harvests water out of air without requiring maintenance, solely using sunlight https://physicsworld.com/wp-content/uploads/2024/08/21-08-24-solar-powered-water-extractor.jpg newsletter1
Half-life measurement of samarium-146 could help reveal secrets of the early solar system https://physicsworld.com/a/half-life-measurement-of-samarium-146-could-help-reveal-secrets-of-the-early-solar-system/ Tue, 20 Aug 2024 15:51:10 +0000 https://physicsworld.com/?p=116378 Isotope is extracted from an accelerator target

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
The radioactive half-life of samarium-146 has been measured to the highest accuracy and precision so far. Researchers at the Paul Scherrer Institute (PSI) in Switzerland and the Australian National University in Canberra made their measurement using waste from the PSI’s neutron source and the result should help scientists gain a better understanding of the history of the solar system.

With a half-life of 92 million years, samarium-146 is ideally suited for dating events that occurred early in the history of the solar system. These include volcanic activity on the Moon, the formation of meteorites, and the differentiation of Earth’s interior into distinct layers.

Samarium-146 in the early solar system was probably produced in a nearby supernova as our solar system was forming about 4.5 billion years ago. Thanks to the isotope’s relatively long half-life, it would have been incorporated into nascent planets and asteroids. The isotope then slowly vanished from the solar system. It is now so rare that it is considered an extinct isotope, whose previous existence is inferred from the presence of the neodymium isotope to which it decays.

There is another isotope, samarium-147, with a half-life that is 1000 times longer than samarium-146. While the two isotopes have identical chemical properties, samarium-147 currently accounts for about 15% of samarium on Earth. Together, these two isotopes can be used for dating rocks, but only if their half-lives are known to sufficiently high accuracy.

Huge range

Unfortunately, the half-life of samarium-146 has proven notoriously difficult to measure. Over the past few decades, numerous studies have placed its value somewhere between 60 and 100 million years, but its exact value within this range has remained uncertain. The main reason for this uncertainty is that the isotope does not occur naturally on Earth and instead is made in tiny quantities in nuclear physics experiments.

In previous studies, the isotope was created by irradiating other samarium isotopes with protons or neutrons. However, this approach has drawbacks. “The main disadvantages are the cost and time required for dedicated irradiation and the fact that the desired isotope is made of the same element as the target material itself,” explains Rugard Dressler at PSI’s Laboratory for Radiochemistry. “This rules out the possibility of separating samarium-146 by chemical means alone.”

To overcome these limitations, a team led by Dorothea Schumann at PSI looked to the Swiss Spallation Neutron Source (SINQ) as a source of the isotope. SINQ creates neutrons by smashing protons into solid targets, which are damaged in the process. To better understand how this damage occurs, a range of different target materials have been irradiated at SINQ. These included tantalum, which Schumann identified as the most promising material from which to extract samarium-146 in solution, using a sequence of highly selective radiochemical separation and purification steps.

“Only in this way it was possible to obtain a sufficient amount of samarium-146 for the precise determination of its half-life – a possibility that is not available anywhere else around the world,” explains PSI’s Zeynep Talip.

Then they used some of the solution to create a thin layer of samarium oxide on a graphite substrate. Using mass spectrometers at PSI and in Australia to study their original solution, the team determined that there were 6.28 × 10¹³ samarium-146 nuclei in their sample.

Alpha particles

The sample was placed at a well-defined distance from a carefully calibrated alpha radiation detector. By measuring the energy of emitted alpha particles, the team confirmed that the particles were produced by the decay of samarium-146. Over the course of three months, they measured the isotope’s decay rate and found it to be just under 54 decays per hour.

From this, they calculated the samarium-146 half-life to be 92 million years, with an uncertainty of just 2.6 million years.
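The published figures are straightforward to cross-check: with N ≈ 6.28 × 10¹³ nuclei decaying at roughly 54 decays per hour, the half-life follows from t½ = ln 2 × N/A. A quick verification of the arithmetic (not a reproduction of the team’s full uncertainty analysis):

```python
import math

N = 6.28e13    # samarium-146 nuclei in the sample
A = 54 / 3600  # measured decay rate: ~54 decays per hour, converted to Bq

half_life_s = math.log(2) * N / A     # radioactive decay law
half_life_yr = half_life_s / 3.156e7  # ~3.156e7 seconds per year
print(f"{half_life_yr / 1e6:.0f} million years")  # ~92 million years
```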

“The half-life derived in our study shows that the results from the last century are compatible with our value within their uncertainties,” Dressler notes. “Furthermore, we were able to reduce the uncertainty considerably.”

This result marks an important breakthrough in an experimental challenge that has persisted for decades, and could soon provide a new window into the distant past. “A more precise determination of the half-life of samarium-146 will pave the way for a more detailed and accurate chronology of processes in our solar system and geological events on Earth,” says Dressler.

The research is described in Scientific Reports.

The post Half-life measurement of samarium-146 could help reveal secrets of the early solar system appeared first on Physics World.

]]>
Research update Isotope is extracted from an accelerator target https://physicsworld.com/wp-content/uploads/2024/08/20-8-24-Samarium-half-life.jpg
Enabling battery quality at scale https://physicsworld.com/a/enabling-battery-quality-at-scale/ Tue, 20 Aug 2024 13:59:59 +0000 https://physicsworld.com/?p=115702 The Electrochemical Society in partnership with BioLogic explores how high-throughput inspection can enable battery quality at scale

The post Enabling battery quality at scale appeared first on Physics World.

]]>

Battery quality lies at the heart of major issues relating to battery safety, reliability, and manufacturability. This talk reviews the challenges and opportunities to enable battery quality at scale. First, the interplay between various battery failure modes and their numerous root causes is described. Then, which failure modes are best detected by electrochemistry, and which are not, is discussed. Finally, how improved inspection – specifically, high-throughput computed tomography (CT) – can play a role in solving the battery quality challenge is reviewed.

An interactive Q&A session follows the presentation.

Peter Attia is co-founder and chief technical officer of Glimpse. Previously, he worked as an engineering lead on some of Tesla’s toughest battery failure modes and managed a team focused on battery data analysis. Peter holds a PhD from Stanford, where he developed seminal machine learning methods for battery lifetime prediction and optimization. He has received honours such as Forbes 30 Under 30 but has not written a bestselling book on aging.

The Electrochemical Society

 

The post Enabling battery quality at scale appeared first on Physics World.

]]>
Webinar The Electrochemical Society in partnership with BioLogic explores how high-throughput inspection can enable battery quality at scale https://physicsworld.com/wp-content/uploads/2024/07/ECS_image_2024_09_18.jpg
AI-assisted photonic detector identifies fake semiconductor chips https://physicsworld.com/a/ai-assisted-photonic-detector-identifies-fake-semiconductor-chips/ Tue, 20 Aug 2024 12:35:14 +0000 https://physicsworld.com/?p=116361 New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Diagram of the RAPTOR detection system

The semiconductor industry is an economic powerhouse, but it is not without its challenges. As well as shortages of new semiconductor chips, it increasingly faces an oversupply of counterfeit ones. The spread of these imitations poses real dangers for the many sectors that rely on computer chips, including aviation, finance, communications, artificial intelligence and quantum technologies.

Researchers at Purdue University in the US have now combined artificial intelligence (AI) and photonics technology to develop a robust new method for detecting counterfeit chips. The new method could reduce the risks of unwanted surveillance, chip failure and theft within the $500 bn global semiconductor industry by reining in the market for fake chips, which is estimated at $75 bn.

The main way of detecting counterfeit semiconductor chips relies on “baking” security tags into chips or their packaging. Such tags work using technologies such as physical unclonable functions made from media such as arrays of metallic nanomaterials. These structures can be engineered to scatter light strongly in specific patterns that can be detected and used as a “fingerprint” for the tagged chip.

The problem is that these security structures are not tamper-proof. They can degrade naturally – for example, if temperatures get too high. If they are printed on packaging, they can also be rubbed off, either accidentally or intentionally.

Embedded gold nanoparticles

The Purdue researchers developed an alternative optical anti-counterfeiting technique for semiconductor devices based on identifying modifications in the patterns of light scattered off nanoparticle arrays embedded in chips or chip packaging. Their approach, which they call residual attention-based processing of tampering response (RAPTOR), relies on analysing the light scattered before and after an array has degraded naturally or been tampered with.

To make the technique work, a team led by electrical and computer engineer Alexander Kildishev embedded gold nanoparticles in the packaging of a batch of semiconductor chips. The team then took several dark-field microscope images of random places on the packaging to record the nanoparticle scattering patterns. This made it possible to produce high-contrast images even though the samples being imaged are transparent to light and provide little to no light absorption contrast. The researchers then stored these measurements for later authentication.

“If someone then tries to swap the chip, they not only have to embed the gold nanoparticles, but they also have to place them all in the original locations,” Kildishev explains.

The role of artificial intelligence

To guard against false positives caused by natural abrasions disrupting the nanoparticles, or a malicious actor getting close to replacing the nanoparticles in the right way, the team trained an AI model to distinguish between natural degradation and malicious tampering. This was the biggest challenge, Kildishev tells Physics World. “It [the model] also had to identify possible adversarial nanoparticle filling to cover up a tampering attempt,” he says.

Writing in Advanced Photonics, the Purdue researchers show that RAPTOR outperforms current state-of-the-art counterfeit detection methods (known as the Hausdorff, Procrustes and average Hausdorff metrics) by 40.6%, 37.3%, and 6.4% respectively. The analysis process takes just 27 ms, and it can verify a pattern’s authenticity in 80 ms with nearly 98% accuracy.
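For context, the Hausdorff-style baselines that RAPTOR is compared against quantify the worst-case mismatch between two point patterns – here, nanoparticle positions extracted from dark-field images at enrolment and at verification. A minimal sketch of such a baseline check using synthetic point sets (this illustrates the metric only; it is not the Purdue pipeline):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)

# Nanoparticle positions (in microns) recorded when the chip is enrolled
enrolled = rng.uniform(0, 100, size=(200, 2))

# Re-imaged positions: small jitter mimics natural drift or mild degradation
reimaged = enrolled + rng.normal(0, 0.1, size=enrolled.shape)

def hausdorff(u, v):
    """Symmetric Hausdorff distance between two 2D point sets."""
    return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])

print(f"genuine chip:     {hausdorff(enrolled, reimaged):.2f} um")  # small

# A swapped chip presents an unrelated nanoparticle pattern
counterfeit = rng.uniform(0, 100, size=(200, 2))
print(f"counterfeit chip: {hausdorff(enrolled, counterfeit):.2f} um")  # large
```

RAPTOR’s reported advantage is to replace such fixed geometric thresholds with a learned model that can also distinguish benign degradation from adversarial tampering.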

“We took on this study because we saw a need to improve chip authentication methods and we leveraged our expertise in AI and nanotechnology to do just this,” Kildishev says.

The Purdue researchers hope that other research groups will pick up on the possibilities of combining AI and photonics for the semiconductor industry. This would help advance deep-learning-based anti-counterfeiting methods, they say.

Looking forward, Kildishev and colleagues plan to improve their nanoparticle embedding process and streamline the authentication steps further. “We want to quickly convert our approach into an industry solution,” Kildishev says.

The post AI-assisted photonic detector identifies fake semiconductor chips appeared first on Physics World.

]]>
Research update New technique could reduce risks of unwanted surveillance, chip failure and theft, say researchers https://physicsworld.com/wp-content/uploads/2024/08/circuit-board-20436508-iStock_Henrik5000.jpg
Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power https://physicsworld.com/a/fast-monte-carlo-dose-calculation-with-precomputed-electron-tracks-and-gpu-power/ Tue, 20 Aug 2024 08:59:15 +0000 https://physicsworld.com/?p=116282 Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>

In this webinar, we will explore innovative advancements in Monte Carlo based dose calculations that are poised to impact radiation oncology quality assurance. This expert session will focus on new developments in 3D dose calculation engines and improved dosimetry capabilities.

Designed for medical physicists and dosimetrists, the discussion will outline the latest developments and emphasize how these can improve dose calculation accuracy, treatment verification processes, and clinical workflows in general. Join us to better understand how fast Monte Carlo methods can contribute to advancing quality assurance in radiation therapy.

An interactive Q&A session follows the presentation.

Veng Jean Heng, PhD, is a medical physics resident at Stanford University. He received both an MSc and a PhD from McGill University. During his MSc, he performed Monte Carlo beam and dose-to-outcome modelling for CyberKnife patients. His PhD was on the clinical implementation of a mixed photon-electron beam radiation therapy technique. His current research interests revolve around the development of dose calculation and optimization methods.

 

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

 

The post Fast Monte Carlo dose calculation with precomputed electron tracks and GPU power appeared first on Physics World.

]]>
Webinar Join the audience for a live webinar on 24 September 2024 sponsored by LAP GmbH Laser Applikationen https://physicsworld.com/wp-content/uploads/2024/08/20240924_LAP-image-1.jpg
Could targeted alpha therapy help treat Alzheimer’s disease? https://physicsworld.com/a/could-targeted-alpha-therapy-help-treat-alzheimers-disease/ Tue, 20 Aug 2024 08:30:00 +0000 https://physicsworld.com/?p=116342 Researchers demonstrate that targeted alpha-particle treatments can reduce the level of amyloid-beta plaque in mouse brain tissues

The post Could targeted alpha therapy help treat Alzheimer’s disease? appeared first on Physics World.

]]>
Alzheimer’s disease is a neurodegenerative disorder with limited treatment options. The causes of Alzheimer’s are complex and not entirely understood. It is commonly thought, however, that the build-up of amyloid-beta plaques and tangles of tau proteins in the brain leads to nerve cell death and dementia. A team at the University of Utah is investigating a new way to use radiation to reduce such deposits and potentially alleviate Alzheimer’s symptoms.

Developing a therapy for Alzheimer’s disease is a key goal for many researchers. One recent study, for example, showed evidence that reducing amyloid-beta plaques with a newly approved antibody-based drug improved cognition in patients with early-stage Alzheimer’s. Alongside these efforts, scientists are studying non-pharmacological approaches such as whole-brain, low-dose ionizing radiation, which has been shown to break up plaques in mice and has exhibited a positive cognitive effect in preliminary clinical studies.

While promising, whole-brain irradiation unavoidably delivers radiation dose to healthy tissues. Instead, the University of Utah team is exploring the potential of targeted alpha therapy (TAT) to reduce amyloid plaque concentrations while minimizing damage to healthy tissue and the associated side effects.

“Our goal was to build on these studies and, as opposed to irradiating the whole brain, target the plaques specifically,” explains lead author Tara Mastren. “TAT could have potential benefits compared to the current antibody treatment, as much smaller doses are required to achieve an effect. Currently, it is hard to say if it will be better as this is new territory and studies need to be done to prove that.”

Targeted irradiation

TAT works by delivering an alpha particle-emitting radionuclide directly to a target, where it releases energy into its immediate surroundings. As alpha particles only travel a few micrometres in tissue, they deliver a highly localized dose. The approach has already proved effective for treating metastatic cancers, and the Utah team postulated that it could also be used to break bonds within amyloid-beta aggregates and facilitate plaque clearance.

To perform TAT, Mastren and colleagues synthesized a compound called BPy (a benzofuran pyridyl derivative) that targets amyloid-beta plaques. They linked BPy to the radionuclide bismuth-213 (²¹³Bi), which has a short half-life of 46 min and decays by emitting a single alpha particle, thereby creating [²¹³Bi]-BiBPy.

To examine whether TAT could reduce amyloid-beta concentrations, the researchers incubated [²¹³Bi]-BiBPy with homogenates created from the brain tissue of mice genetically modified to develop amyloid plaques. After 24 h, they measured the concentration of amyloid-beta in the samples using Western blot and enzyme-linked immunosorbent assays.

Both analysis methods revealed a significant, dose-dependent reduction in amyloid-beta following incubation with [²¹³Bi]-BiBPy, with plaque reduced to below the detection limits. Incubating the brain homogenate with free ²¹³Bi also reduced levels of amyloid-beta, but to a significantly lesser extent. Other proteins in the homogenate were not affected, suggesting a lack of off-target damage.

The team found that a dose of 0.01488 MBq per picogram of amyloid beta was required to reduce amyloid by 50% in vitro. Mastren notes that this finding must now be investigated in vivo, as biological processes in a living brain differ from those in postmortem tissue. “However, this value gives a starting point for our in vivo studies,” she adds.
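Because of the 46 min half-life, essentially all of the administered activity decays within a few hours, so the quoted dose can be converted into a total number of alpha decays by integrating the decay law to completion. A rough illustration of what 0.01488 MBq per picogram implies (a back-of-the-envelope estimate, not a dosimetry calculation):

```python
import math

half_life = 46 * 60  # s; bismuth-213 half-life
A0 = 0.01488e6       # Bq; reported activity per picogram of amyloid-beta

# Integrating A0 * exp(-lambda * t) from t = 0 to infinity gives A0 / lambda
total_decays = A0 * half_life / math.log(2)
print(f"~{total_decays:.1e} alpha decays per picogram")  # ~5.9e7
```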

To confirm the targeted binding of [²¹³Bi]-BiBPy, the researchers also examined 10 µm-thick brain tissue sections from the mice. They stained the sections with a fluorescent BPy probe (fluorescein-functionalized) and with thioflavin-S, an amyloid stain. Thioflavin-S revealed a dense presence of plaques, particularly in the cortex. The fluorescent BPy probe also stained plaques in the cortex, but less intensely and with more off-site binding. This finding highlights the need to investigate alternative targeting vectors to reduce white-matter binding.

The researchers conclude that TAT can significantly reduce amyloid-beta aggregates in vitro, paving the way for studies in live animals and eventually in humans. As such, they plan to start in vivo testing of TAT later this year.

“Initially, we will be looking at the biodistribution, ability to cross the blood–brain barrier, immune response to treatment and effects on plaque concentrations,” says Mastren. “If successful, we hope to follow up with testing cognitive response to treatment.”

The research is described in the Journal of Nuclear Medicine.

The post Could targeted alpha therapy help treat Alzheimer’s disease? appeared first on Physics World.

]]>
Research update Researchers demonstrate that targeted alpha-particle treatments can reduce the level of amyloid-beta plaque in mouse brain tissues https://physicsworld.com/wp-content/uploads/2024/08/20-08-24-Mastren-lab.jpg newsletter1
Multiple molecular hexaquarks are predicted by theoretical study https://physicsworld.com/a/multiple-molecular-hexaquarks-are-predicted-by-theoretical-study/ Mon, 19 Aug 2024 15:13:07 +0000 https://physicsworld.com/?p=116341 Exotic hadrons comprising six quarks could be observed in future experiments

The post Multiple molecular hexaquarks are predicted by theoretical study appeared first on Physics World.

]]>
Two types of hexaquark

Multiple hexaquarks – strongly interacting hadronic particles comprising six quarks – are likely to exist, according to a new theoretical study by four physicists in China and Germany. The hypothetical particles they considered contained strange (s) and charm (c) quarks. These are both heavy quarks whose presence usually makes hadrons very short-lived and difficult to study experimentally. However, evidence from accelerator experiments has already hinted at the existence of such hexaquarks, leading the team to believe that future experiments at facilities like the Large Hadron Collider (LHC) could validate their predictions.

“Hexaquarks are a type of exotic hadron, distinct from the more familiar baryons (which contain three quarks, like protons and neutrons) and mesons (which contain a quark-antiquark pair),” explains Bo Wang of Hebei University, who collaborated on the research. “In general, there are two types of hexaquark states: one where six quarks are confined within a compact hadron, and another that consists of a molecular-like structure formed by two baryons [see figure]. Our [research] focuses on the latter type.”

In molecule-like hexaquarks, the constituent baryons are expected to be bound far less tightly by the strong interaction than the quarks within each baryon. This makes these hexaquarks particularly interesting for studying new aspects of the strong interaction that binds quarks together. This could help physicists better understand quantum chromodynamics – the theory that describes the strong interaction, which is enormously challenging to implement in calculations.

In their new work, Wang and colleagues employed a combination of techniques used in previous hadron studies, incorporating specific parameters related to the strong interaction that were determined from earlier research on other exotic hadrons, such as tetraquarks (four-quark states) and pentaquarks (five-quark states).

Bag of quarks

“It can be easily inferred within our model that if the molecular tetraquarks and pentaquarks exist, then the molecular hexaquarks must also exist,” said Wang. “Experimental searches for these hexaquark states will help reveal whether nature prefers to construct higher-level structural units, namely hadronic molecular states, or whether it merely favours putting the quarks into a bag, meaning compact multiquark states.”

By applying a range of sophisticated techniques, the scientists were able to calculate the hexaquarks’ masses and lifetimes, which are among the most important parameters of elementary particles and play a primary role in their identification in experiments.

“We have developed a method that combines effective field theory and the quark model to describe the residual strong interactions between quarks,” explained Wang. “The parameters are determined using well-measured states, such as the Pc pentaquarks and the tetraquarks X(3872) and Zc(3900). Finally, the mass spectrum of the molecular-like hexaquark states is determined by solving the Lippmann-Schwinger equation.”
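The Lippmann–Schwinger equation referred to here is the standard integral equation for the scattering T-matrix. Schematically,

\[
T(E) = V + V\,\frac{1}{E - H_0 + i\epsilon}\,T(E),
\]

where V is the baryon–baryon interaction potential and H₀ the free Hamiltonian. A molecular hexaquark shows up as a pole of T(E) just below the two-baryon threshold, with the pole position giving the state’s mass.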

Using this approach, Wang and his colleagues explored various potential hexaquark configurations, all of which included not only the lighter up (u) and down (d) quarks found in protons and neutrons, but also the much heavier s and c quarks. Their theoretical models encompassed hexaquarks made from two identical baryons, a baryon paired with its antiparticle, and combinations of two different baryons.

Wealth of testable predictions

While only non-molecular hexaquark candidates have been observed experimentally so far, Wang and colleagues offer a wealth of testable predictions about the subtle properties of strong interactions, making these findings particularly significant.

“Several candidates for molecular-type tetraquarks and pentaquarks have been observed experimentally, notably by collaborations such as the LHCb, BESIII, and Belle, but no candidates for molecular-type hexaquark states containing heavy quarks have yet been found,” said Wang.

The researchers were also able to compare their findings with results from other methods, such as lattice quantum chromodynamics, where space is represented by a finite grid, enabling detailed calculations. In all cases where comparisons were possible, the results were consistent, lending further credibility to the team’s conclusions. However, only experimental evidence can provide definitive proof, and the researchers are optimistic that such confirmation is not far off.

“As future collider experiments are upgraded and established, they will undoubtedly generate a wealth of data for the study of hadronic physics,” concluded Wang. “Research into hadronic molecular states is currently one of the most vibrant areas of inquiry and is expected to remain so for the foreseeable future.”

“Our aspiration is to develop a theoretical framework that comprehensively describes the residual strong interactions, unifying the nuclear forces and interactions between heavy-flavour hadrons under a consistent model and set of parameters. This holds profound significance for our understanding of strong interactions, making the investigation of various properties of hadronic molecular states an excellent entry point for this endeavour.”

The research is described in Physical Review D.

The post Multiple molecular hexaquarks are predicted by theoretical study appeared first on Physics World.

]]>
Research update Exotic hadrons comprising six quarks could be observed in future experiments https://physicsworld.com/wp-content/uploads/2024/08/19-4-24-hexaquarks-list.jpg newsletter1
Quantum dot liquid scintillator could revolutionize neutrino detection https://physicsworld.com/a/quantum-dot-liquid-scintillator-could-revolutionize-neutrino-detection/ Mon, 19 Aug 2024 12:00:21 +0000 https://physicsworld.com/?p=116324 A new type of water-based scintillator made from quantum dots could make neutrino detectors safer and cheaper

The post Quantum dot liquid scintillator could revolutionize neutrino detection appeared first on Physics World.

]]>
Neutrino detectors contain up to tens of thousands of tonnes of liquid scintillator that emits a flash of light whenever it interacts with a neutrino. Such scintillators are typically organic compounds dissolved in organic solvents, so are toxic and highly flammable. By contrast, the water-based quantum dot liquid scintillator developed by a team headed up at King’s College London (KCL) in the UK is non-toxic and non-flammable – making it less hazardous to work with, as well as more environmentally friendly.

Quantum dots (QDs) are tiny semiconductor crystals that confine electrons and behave like artificial atoms when absorbing and emitting light. The new scintillator contains commercially available 6.4 nm-diameter QDs – optimized to emit the blue light wavelengths preferentially detected by particle physics photon sensors – which the researchers dissolved in the organic solvent toluene before mixing with water and a stabilizing agent of oleic acid molecules.

This mixture was then “agitated to create an emulsion, similar to shaking a bottle of salad dressing to mix oil and vinegar,” explains Aliaksandra Rakovich, who co-led the research along with Teppei Katori. Finally, after settling, the water and oil phases separated and the water phase – now containing the QDs – was further diluted with water to reach the correct concentration for detecting particles such as neutrinos.

As detailed in their recent Journal of Instrumentation paper, the researchers measured the light emitted from a small sample of their liquid scintillator while cosmic rays (atmospheric muons) passed through it. This revealed a high scintillation yield, comparable to that from existing scintillators. The absorbance and emission spectra also remained stable over two years: an essential quality for neutrino experiments, which typically take several years to acquire data.

“The potential for our new scintillator is huge because quantum dots can have so many different types of core and different sizes, so you can choose all kinds of absorption and emission spectra,” says Katori, whose current work includes helping to design the Japan-based international Hyper-Kamiokande neutrino experiment due to start operating in 2027.

Katori hopes that within 5–10 years the new scintillator could not only replace those used in large-scale detectors for dark matter, neutrons or neutrinos, but could also form the basis for desktop-sized generic radiation sensors. It could also help monitor the neutrino spectrum close to the reactor core in nuclear power facilities: this spectrum alters if plutonium is being illegally extracted.

Next, the researchers aim to “develop methods for large-scale synthesis of QDs directly in water”, says Rakovich, adding that this will include removing cadmium and other toxic elements to “reduce the ecological footprint even further”. They also intend to carry out quantitative testing and optimization of stability, safety and performance in increasingly larger samples of their scintillator while under neutrino bombardment over long time scales.

Alex Himmel, a scientist at Fermilab in the USA who was not involved in the research, says that he finds the new scintillator promising. “For some time there has been substantial interest in making water-based liquid scintillators which have advantages in terms of safety and cost,” explains Himmel, who is co-spokesperson for Fermilab’s NOvA neutrino experiment, which currently uses an organic liquid scintillator.

“Safety is always a top concern when building particle physics experiments, both for the obvious reason that we don’t want anyone to get hurt, and because potentially dangerous materials typically require costly safety measures,” says Himmel. “If the materials themselves are less hazardous, it makes the experiments easier and cheaper to build and operate.”

Himmel says that the KCL researchers “estimate 4000 photons per MeV from their test sample”, noting that “our experiment operates today at similar light yields”. But he cautions that for this new liquid scintillator to be adopted by end-users it must “be produced cost-effectively at large scales and show a light yield that is stable over time”.

The post Quantum dot liquid scintillator could revolutionize neutrino detection appeared first on Physics World.

]]>
Research update A new type of water-based scintillator made from quantum dots could make neutrino detectors safer and cheaper https://physicsworld.com/wp-content/uploads/2024/08/19-08-24-KCL-QD-scintillator-experiment.jpg newsletter1
How ‘pop Newton’ can help inspire the next generation https://physicsworld.com/a/how-pop-newton-can-help-inspire-the-next-generation/ Mon, 19 Aug 2024 10:00:37 +0000 https://physicsworld.com/?p=115817 Richard Easther and Frank Wang argue that a "Newton first" approach can be better for undergraduates than focusing solely on "modern physics"

The post How ‘pop Newton’ can help inspire the next generation appeared first on Physics World.

]]>
The top two physics books on Goodreads are Stephen Hawking’s A Brief History of Time and Brian Greene’s The Elegant Universe. Both tomes focus on the quest for a “theory of everything” – physics so advanced it is not yet discovered. Much of the “shop window” of popular physics in bookshops is filled with ideas whose bewildering complexity underwrites their allure – strings, extra dimensions or multiverse cosmology. This is in contrast to music or art, where the classics tend to be more popular than avant-garde compositions.

For teachers and communicators of physics, it is easy to key into this fascination for novel ideas. After all, students are often more attracted to quantum mechanics than thermodynamics. Yet while relativity and quantum mechanics are classed as “modern physics” they are anything but. The decadus mirabilis in which quantum mechanics bloomed is a century old. The work of Erwin Schrödinger and Werner Heisenberg is now closer to Faraday’s discovery of electromagnetic induction than to the present day.

In the classroom, physics often gives far more space than other sciences to centuries-old ideas. As physicists, we know that new developments largely embrace and extend existing ideas. Indeed, about a third of most undergraduate first-year physics textbooks – statics, dynamics, circular motion and waves – are firmly “Newtonian”. But with a focus on new ideas, it can be a struggle to maintain students’ interest for the full depth of physics’ repertoire.

This need to make physics more appealing for newcomers was on our minds when we recently refreshed the University of Auckland’s physics curriculum. Beyond revamping the delivery of core material, we also challenged ourselves to create a “pop Newton” course that presents physics as a coherent whole and is open to any undergraduate, not just physics students.

The course treads similar ground to the well-known book – and widely taught course – Physics For Poets by Robert March, which is a breezy survey from Newton through to the Standard Model of particle physics, using only simple algebra. However, simply passing high-school algebra does not guarantee the fluency needed to draw insight from algebraic arguments.

Instead, we decided to work with metaphor and visualization, as happens already, for example, when describing black-hole mergers in an introductory astronomy course. There are many strategies to explain the nature of space–time without resorting to tensor calculus. Yet the simplicity of Newtonian mechanics seems to have prevented the development of similar explanatory tools when it comes to more everyday physics.

This was not so much physics for poets as it was physics via poetry.

From Newton to the LHC

Our approach began with two-body interactions on toy air-hockey tables. Students videoed collisions between plastic pucks and replicated them in a pre-programmed JavaScript simulator that runs in a web browser. We explained that the simulator implemented Newton’s laws: nothing changes how it is moving unless it is pushed; the more you push the bigger the change, but the bigger the object the smaller the change; and if you push on something, it pushes back.

This exercise opened the door to discussions on a huge range of phenomena without using algebra, much less calculus. For example, does neutron decay make sense if it leaves only an electron and a proton? (Answer: it doesn’t.) Students could put many particles into the simulator and watch as their speeds take on the Maxwell–Boltzmann distribution, showing the genesis of statistical mechanics. Mix one big particle with many little particles, hide the little particles on screen, and Brownian motion appears. This allowed us to replicate the arguments that established the existence of atoms.
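
For a flavour of what sits under the hood of such a simulator, here is a minimal sketch (in Python, rather than the course’s JavaScript) of the elastic two-body collision update. The velocity formulas are the standard results of momentum and energy conservation; the function and example values are our own illustration, not the course’s actual code:

def collide_1d(m1, v1, m2, v2):
    # Velocities after a head-on elastic collision of two pucks:
    # both momentum and kinetic energy are conserved.
    v1_new = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new

# Equal masses simply exchange velocities...
print(collide_1d(1.0, 2.0, 1.0, 0.0))   # (0.0, 2.0)
# ...while a heavy puck barely notices a light one.
print(collide_1d(10.0, 2.0, 0.1, 0.0))  # (~1.96, ~3.96)

Applied to every pair of pucks in turn, this one update is enough to generate the Maxwell–Boltzmann distribution and Brownian motion described above.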

Over a few weeks, we drew a conceptual line from the simplest two-body collisions through to CERN’s Large Hadron Collider. The emergent properties of many-body systems then led to a discussion of reductionist explanations of complex phenomena. We looked at materials science (including quantum mechanics), climate dynamics and infectious-disease transmission. We drew on the expertise of Auckland physicists who work on climate and who made key contributions to New Zealand’s COVID-modelling efforts. In that way, we also showcased the range of problems addressed via physics and its methods. Ironically, post-pandemic staffing constraints have made it difficult to replicate this pedagogical experiment. However, even as a one-off, it showed the clear value in an approach that foregrounds the coherence and historical sweep of physics.

We also wanted to avoid giving the impression that physics is “solved” at a fundamental level. For one thing, its applications continue to reshape the world. Quantum technologies are cutting edge, even though quantum mechanics is old enough to have coexisted with the Model T Ford, and we can now illustrate Newton’s laws of motion with spacecraft as well as cannonballs. But it is true that a large majority of physicists are applying physics to new problems, rather than seeking “new physics”.

Deep progress in physics is measured on a clock that ticks in centuries. We believe we must highlight the long narrative arc of the field, which is a profound story of its own. While 95% of the universe is currently unknown to physics, the story comes full circle when we recall that the dark material in the universe is revealed in part via apparent inconsistencies with Newtonian mechanics on galactic scales.

And one last conclusion: a lab session with a roomful of students playing air hockey is noisy fun.

The post How ‘pop Newton’ can help inspire the next generation appeared first on Physics World.

]]>
Opinion and reviews Richard Easther and Frank Wang argue that a "Newton first" approach can be better for undergraduates than focusing solely on "modern physics" https://physicsworld.com/wp-content/uploads/2024/08/24-08-Forum-Pop-Newton-Air-hockey-187193696-shutterstock_Fer-Gregory.jpg newsletter
Physicists reveal the role of ‘magic’ in quantum computational power https://physicsworld.com/a/physicists-reveal-the-role-of-magic-in-quantum-computational-power/ Mon, 19 Aug 2024 09:19:17 +0000 https://physicsworld.com/?p=116250 Entanglement and magic interact in ways that impact quantum algorithms and physical systems

The post Physicists reveal the role of ‘magic’ in quantum computational power appeared first on Physics World.

]]>
Cartoon showing a landscape divided between entanglement and magic. The entanglement part of the landscape is green, with gently rolling terrain, and a computer hovering above it with a green tick mark. The magic part is filled with spiky black mountains and fiery red pits, and the computer hovering above it is in flames

Entanglement is a fundamental concept in quantum information theory and is often regarded as a key indicator of a system’s “quantumness”. However, the relationship between entanglement and quantum computational power is not straightforward. In a study posted on the arXiv preprint server, physicists in Germany, Italy and the US shed light on this complex relationship by exploring the role of a property known as “magic” in entanglement theory. The study’s results have broad implications for various fields, including quantum error correction, many-body physics and quantum chaos.

Traditionally, the more entangled your quantum bits (qubits) are, the more you can do with your quantum computer. However, this belief – that higher entanglement in a quantum state is associated with greater computational advantage – is challenged by the fact that certain highly entangled states can be efficiently simulated on classical computers and do not offer the same computational power as other quantum states. These states are often generated by classically simulable circuits known as Clifford circuits.

To address this discrepancy, researchers introduced the concept of “magic”. Magic quantifies the non-Clifford resources necessary to prepare a quantum state and thus serves as a more nuanced measure of a state’s quantum computational power.
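
To make the distinction concrete, here is a toy sketch of the idea: Clifford circuits are classically simulable, so a crude proxy for the magic of a circuit is simply its count of non-Clifford gates, such as T gates. The gate list and helper below are our own hypothetical illustration, not a measure used in the paper:

# Clifford gates alone generate circuits a classical computer can
# simulate efficiently; T gates are the canonical non-Clifford resource.
CLIFFORD_GATES = {"H", "S", "CNOT", "X", "Y", "Z"}

def t_count(circuit):
    # A crude stand-in for magic: how many T gates does the circuit use?
    return sum(1 for gate, *_ in circuit if gate == "T")

circuit = [("H", 0), ("CNOT", 0, 1), ("T", 1), ("S", 0), ("T", 0)]
print("T gates:", t_count(circuit))                                     # 2
print("Clifford only?", all(g in CLIFFORD_GATES for g, *_ in circuit))  # False

The measures used in the study are more refined than a raw gate count, but the intuition is the same: the more non-Clifford resources a state needs, the further it sits from what classical simulation can handle.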

Studying entanglement and magic

In the new study, Andi Gu, a PhD student at Harvard University, together with postdoctoral researchers Salvatore F E Oliviero of Scuola Normale Superiore and CNR in Pisa and Lorenzo Leone of the Dahlem Center for Complex Quantum Systems in Berlin, approach the study of entanglement and magic by examining operational tasks such as entanglement estimation, distillation and dilution.

The first of these tasks quantifies the degree of entanglement in a quantum system. The goal of entanglement distillation, meanwhile, is to use LOCC (local operations and classical communication) to transform a quantum state into as many Bell pairs as possible. Entanglement dilution, as its name suggests, is the converse of this: it aims to convert copies of the Bell state into less entangled states using LOCC with high fidelity.

Gu and colleagues find a computational phase separation between quantum states, dividing them into two distinct regimes: the entanglement-dominated (ED) and magic-dominated (MD) phases. In the former, entanglement significantly surpasses magic, and quantum states allow for efficient quantum algorithms to perform various entanglement-related tasks. For instance, entanglement entropy can be estimated with negligible error, and efficient protocols exist for entanglement manipulation (that is, distillation and dilution). The research team also propose efficient ways to detect entanglement in noisy ED states, showing their surprising resilience compared to traditional states.

In contrast, states in the MD phase have a higher degree of magic relative to entanglement. This makes entanglement-related tasks computationally intractable, highlighting the significant computational overhead introduced by magic and requiring more advanced approaches. “We can always handle entanglement tasks efficiently for ED states, but for MD states, it’s a mixed bag – while there could be something that works, sometimes nothing works at all,” Gu, Leone and Oliviero tell Physics World.

Practical implications

As for the significance of this separation, the trio say that in quantum error correction, understanding the interplay between entanglement and magic can improve the design of error-correcting codes that protect quantum information from decoherence (a loss of quantumness) and other errors. For instance, topological error-correcting codes that rely on the robustness of entanglement, such as those in three-dimensional topological models, benefit from the insights provided by the ED-MD phase distinction.

The team’s proposed framework also offers theoretical explanations for numerical observations in hybrid quantum circuits (random circuits interspersed with measurements), where transitions between phases are observed. These findings improve our understanding of the dynamics of entanglement in many-body systems and demonstrate that entanglement of states within the ED phase is robust under noise.

The trio say that next steps for this research could take several directions. “First, we aim to explore whether ED states, characterized by efficient entanglement manipulation even with many non-Clifford gates, can be efficiently classically simulated, or if other quantum tasks can be performed efficiently for these states,” they say. Another avenue would be to extend the framework to continuous variable systems, such as bosons and fermions.

The post Physicists reveal the role of ‘magic’ in quantum computational power appeared first on Physics World.

]]>
Research update Entanglement and magic interact in ways that impact quantum algorithms and physical systems https://physicsworld.com/wp-content/uploads/2024/08/15-08-2024-LISTING-Entanglement-vs-magic.png newsletter1
Heisenberg gets ‘let off the hook’ in new historical drama based on the Farm Hall transcripts https://physicsworld.com/a/heisenberg-gets-let-off-the-hook-in-new-historical-drama-based-on-the-farm-hall-transcripts/ Fri, 16 Aug 2024 09:56:58 +0000 https://physicsworld.com/?p=116246 Philip Ball reviews Farm Hall by Katherine Moar at the Theatre Royal Haymarket, London, which runs until 31 August 2024

The post Heisenberg gets ‘let off the hook’ in new historical drama based on the Farm Hall transcripts appeared first on Physics World.

]]>
As the Second World War reached its endgame in Europe in 1945, Allied forces advancing towards Berlin raced to round up German scientists who’d worked on the Nazis’ “Uranium Project” to harness nuclear fission. Code-named the Alsos mission, it picked up the likes of Max von Laue, Otto Hahn (who’d led the experiments to discover fission in 1938), Carl von Weizsäcker, and the head of the uranium work Werner Heisenberg.

The Allied military were eager to prevent those eminent scientists from falling into Russian hands. But the Americans leading Alsos had little idea of what to do with these researchers, despite the mission having Dutch physicist Samuel Goudsmit as its scientific leader. The British forces, however, offered to take them off their hands, flying the scientists to England where they were interned in a country house in Cambridgeshire called Farm Hall.

Held for six months from July 1945, the scientists were well provided for and free to talk among themselves. Unbeknownst to them, however, British intelligence had bugged the house to assess if these men could be trusted to co-operate in the post-war reconstruction of Germany. Heisenberg, as arrogant and superior as ever, dismissed the idea of any such eavesdropping. “I don’t think they know the real Gestapo methods”, he said. “They’re a bit old-fashioned in that respect.”

We know precisely what the interned scientists discussed because full transcriptions of their conversations have been available for more than three decades, first appearing in the book Operation Epsilon (IOP Publishing 1993). It’s an episode that cries out for dramatization. You have the scientists’ anxieties about what they faced next, the unfolding of bitter rivalries and blame games, and the denouement of the Hiroshima and Nagasaki bombs, news of which was met with horror and disbelief. What’s more, Farm Hall was already virtually a theatrical stage set.

Farm Hall

No wonder, then, that the production of Farm Hall at the Theatre Royal Haymarket in London, written by playwright and historian Katherine Moar, has several precedents. The events were first dramatized by David Sington in BBC TV’s Horizon programme in 1992 and later formed the subject of a 2010 BBC radio play. There’s also been Operation Epsilon – a 2013 play by US playwright Alan Brody that ran at the Southwark Playhouse in London only last autumn.

Moar’s own play premiered last year at London’s Jermyn Street Theatre before touring and now returning to the grander Haymarket. The issues were also searchingly explored in Michael Frayn’s Copenhagen (1998), which depicts the meeting of Heisenberg with Niels Bohr in Nazi-occupied Denmark in 1941. Those issues include the culpability of the scientists in building an atomic bomb for Hitler and the wider moral tensions between science, governance and warfare.

The Farm Hall transcripts are a remarkable resource for historians trying to deduce the real intentions and achievements of the German physicists who worked on the Uranium Project. But they are something of a straitjacket for the dramatist, since in effect the script is already written. While Moar’s own dialogue is sprightly, she is therefore not really able to shed new light on the events.

The roles given to the German scientists do not differ much from those of previous dramatizations. Heisenberg loftily considers himself the intellectual leader and the future hope for German science. Von Laue (who did not work on uranium) is scornful of the others’ attempts to justify their support for a depraved regime. Hahn feels personally responsible for the horrors of Hiroshima.

Kurt Diebner, who led a rival uranium research team and was a Nazi party member, clashes with Heisenberg, while the younger Erich Bagge frets about having also joined the party for the sake of career advancement. Von Weizsäcker is the jovial socialite in Moar’s version, working with his hero Heisenberg to construct an extenuating story for posterity.

The key question is why the Germans, with so much expertise in nuclear science, failed to get close to making a bomb, or even a self-sustaining nuclear pile (like the one built by Enrico Fermi at Chicago in 1942). Here historians are divided. Heisenberg, as Moar acknowledges, sought to find a story that absolved the Germans of moral failure while also denying that they got the physics wrong.

At first he refused to believe the announcement of the American bomb, on the grounds that they could not possibly have succeeded where he had failed. However, he later spun a story in which the physicists had cleverly persuaded the Nazis to support the scientific work without overpromising about delivery. Later, Heisenberg even implied that he and others had deliberately falsified the maths to sabotage the bomb project.

The latter idea was popularized in the journalist Thomas Powers’ 1993 book Heisenberg’s War: the Secret History of the German Bomb, which influenced Frayn’s play but for which there is no firm documentary evidence. In fact, the US historian Mark Walker has called that version of events “tragically absurd”. To my mind, Moar also lets Heisenberg off the hook too easily.

In a slightly arch play on the uncertainty principle – a motif that Frayn also used – she has Heisenberg deliver a final soliloquy in which he answers the question “Did you try to build a bomb?” with: “On some days yes. On others, no.” I find it more probable that the German scientists lacked the conviction that they could achieve their goal soon enough to make a difference to the war. Not having argued the case strongly, they were simply not given the resources to make much progress.

What matters more in retrospect is that so few of the scientists, including especially Heisenberg and Weizsäcker but also Hahn, took responsibility for what they had done under the Third Reich. Perhaps the only researcher who did, ironically, was Lise Meitner, who famously interpreted Hahn’s results as nuclear fission after she had fled Berlin in 1938 because of her Jewish heritage.

“You did not want to see it”, she later wrote to Hahn. “It was too inconvenient.”

The post Heisenberg gets ‘let off the hook’ in new historical drama based on the Farm Hall transcripts appeared first on Physics World.

]]>
Opinion and reviews Philip Ball reviews Farm Hall by Katherine Moar at the Theatre Royal Haymarket, London, which runs until 31 August 2024 https://physicsworld.com/wp-content/uploads/2024/08/2024-08-Ball-Fram-Hall-photo-cast.jpg newsletter
Cryo-electron tomography reveals structure of Alzheimer’s plaques and tangles in the brain https://physicsworld.com/a/cryo-electron-tomography-reveals-structure-of-alzheimers-plaques-and-tangles-in-the-brain/ Fri, 16 Aug 2024 08:30:00 +0000 https://physicsworld.com/?p=116276 Researchers determine 3D architecture of the amyloid-beta and tau proteins that aggregate in the brain in Alzheimer’s disease

The post Cryo-electron tomography reveals structure of Alzheimer’s plaques and tangles in the brain appeared first on Physics World.

]]>
Imaging Alzheimer’s disease in the brain

Alzheimer’s disease is characterized by the abnormal formation of amyloid-beta peptide plaques and tau tangles in the brain. Although the disease was first identified in 1907, the molecular structures and arrangements of these protein aggregates remain unclear. Now, a research team headed up at the University of Leeds has determined the 3D architecture of these molecules within a human brain for the first time, reporting the findings in Nature.

The researchers used cryo-electron tomography (cryo-ET) techniques to create 3D maps of tissues in a postmortem Alzheimer’s disease donor brain. They revealed the molecular structure of tau in brain tissue and the arrangement of amyloids, and identified new structures entangled within these pathologies.

“[These] detailed 3D images of brain tissue…for the first time bring clarity to the in situ organization of amyloid-beta and tau filament,” states Sjors Scheres, of the MRC Laboratory of Molecular Biology, in an accompanying commentary article.

Scheres explains that the research team had to overcome several major hurdles, including slicing thin enough brain tissue samples for electrons to pass through, freezing hydrated samples fast enough to prevent crystallization that can interfere with the cryo-ET imaging, and identifying relevant areas containing amyloid-beta and tau tangles to image.

For the study, lead author René Frank and colleagues examined freeze-thawed postmortem brain samples of the mid-temporal gyrus from an Alzheimer’s disease donor and a healthy donor. To identify areas of amyloid-beta and tau, they thawed the samples, sliced the brain tissues into 100–200 μm slices and added methoxy-X04 (a fluorescent dye that binds amyloid), before rapidly refreezing the samples.

The researchers then performed cryo-ET on 70-nm-thick tissue cryo-sections from a dye-labelled amyloid-beta plaque and a location enriched in tau tangles and threads. Using cryo-fluorescence microscopy to guide the cryo-ET, they acquired images from different angles and used these to computationally reconstruct a tomographic volume. They collected 42 tomograms in and around regions of amyloid-beta, 25 tomograms in regions containing tau tangles, plus 64 tomograms from the healthy brain tissue as controls.

To obtain higher-resolution structural information, the researchers picked subvolumes containing filaments for alignment and averaging. Subtomogram averaging of 136 tau filaments from a single tomographic volume generated the in situ structure of tau with 8.7 Å resolution.

The researchers report that the amyloid-beta plaques had a lattice-like architecture of amyloid fibrils interspersed with non-amyloid constituents, including extracellular vesicles, fragments of lipid membranes and unidentifiable cuboidal particles. Because these non-amyloid constituents were not present in healthy brain samples, they suggest that they are also a component of Alzheimer’s pathology, and may be related to amyloid-beta biogenesis or a cellular response to amyloid.

The amyloid-beta plaques also contained branched amyloid fibrils and protofilament-like rods. The team speculates that these branched fibrils and rods may contribute to the high local concentration of amyloid-beta that characterizes plaques.

Frank and colleagues also identified tau clusters within cells and in extracellular locations. The tau filaments were unbranched and arranged in parallel clusters. They observed both paired helical filaments and straight filaments, which did not mix randomly with each other, but tended to be close to filaments of the same type, often arranged with the same polarity. The researchers suggest that the non-random arrangement may be caused by interactions between filaments or growth in parallel from neighbouring focal points.

The collaboration – also including researchers at Amsterdam UMC, the University of Cambridge and Zeiss Microscopy – represents new efforts by structural biologists to study proteins directly within cells and tissues, to determine how proteins work together and affect one another, particularly in human cells and tissues affected by disease.

“The approaches for obtaining 3D molecular architectures and structures of human tissues with cryo-CLEM [cryo-correlated light and EM]-guided cryo-ET in Alzheimer’s disease sets the ground for interrogating other common dementias and movement disorders,” says Frank. “These include frontotemporal dementia, amyotrophic lateral sclerosis (motor neuron disease) and Parkinson’s disease.”

The post Cryo-electron tomography reveals structure of Alzheimer’s plaques and tangles in the brain appeared first on Physics World.

]]>
Research update Researchers determine 3D architecture of the amyloid-beta and tau proteins that aggregate in the brain in Alzheimer’s disease https://physicsworld.com/wp-content/uploads/2024/08/16-08-24-Alzheimer-imaging-featured.jpg
Quantum sensors monitor brain development in children https://physicsworld.com/a/quantum-sensors-monitor-brain-development-in-children/ Thu, 15 Aug 2024 15:19:52 +0000 https://physicsworld.com/?p=116291 This podcast explores how quantum technologies are revolutionizing medicine

The post Quantum sensors monitor brain development in children appeared first on Physics World.

]]>
Margot Taylor – director of functional neuroimaging at Toronto’s Hospital for Sick Children – is our first guest in this podcast. She explains how she uses optically-pumped magnetometers (OPMs) to do magnetoencephalography (MEG) studies of brain development in children.

An OPM uses quantum spins within an atomic gas to detect the tiny magnetic fields produced by the brain. Unlike other sensors used for MEG, which must be kept at cryogenic temperatures, OPMs can be deployed at room temperature in a simple helmet that puts the sensors very close to the scalp.

The OPM-MEG helmets are made by Cerca Magnetics and the UK-based company’s managing director joins the conversation to explain how the technology works. David Woolger also talks about the success the company has enjoyed since its inception in 2020.

Our final guest in this podcast is Stuart Nicol, who is chief investment officer at Quantum Exponential – a UK-based company that invests in quantum start-ups. He gives his perspective on the medical sector and talks about a company called Siloton, which is making a crucial eye-imaging technology more accessible.

The post Quantum sensors monitor brain development in children appeared first on Physics World.

]]>
Podcasts This podcast explores how quantum technologies are revolutionizing medicine https://physicsworld.com/wp-content/uploads/2024/08/15-8-24-Cerca-helmet-list.jpg newsletter
Fermilab is ‘doomed’ without management overhaul claims whistleblower report https://physicsworld.com/a/fermilab-is-doomed-without-management-overhaul-claims-whistleblower-report/ Thu, 15 Aug 2024 11:00:18 +0000 https://physicsworld.com/?p=116236 A group of anonymous whistleblowers say that Fermilab is in 'crisis' and needs a management shake-up

The post Fermilab is ‘doomed’ without management overhaul claims whistleblower report appeared first on Physics World.

]]>
A group of self-styled “whistleblowers” at Fermilab, the US’s premier particle-physics facility, is claiming that the lab is in “crisis” and that “without a complete [management] shake-up” it is “doomed”. Published in the form of a 113-page “white paper” on the arXiv pre-print server, the criticism comes as the US Department of Energy (DOE), which funds Fermilab, is preparing to announce a new contractor to manage the day-to-day running of the lab.

The paper has been written by disgruntled staff members and visiting experimentalists, who in December 2023 set up a think tank to help Fermilab overcome what they called its “mission and physics impasses”. The authors, who are anonymous, say they have based their report on interviews and surveys of employees at the lab. It has, however, been formally signed by Giorgio Bellettini, who worked at Fermilab in the 1980s and 2010s, and neutrino physicist William Barletta from the Massachusetts Institute of Technology.

A Fermilab spokesperson told Physics World that the lab’s leadership is taking “seriously” the issues raised in the report and the current dissatisfaction among some staff. “They are assessing the situation and working to improve staff satisfaction,” the spokesperson says, adding that current director Lia Merminga conducted a staff climate survey when she took up office in 2022. That resulted in “some of the most pressing issues” being addressed and led to a “culture of excellence initiative” being established that will begin in full next year. Its goal is a “measurable improvement” in staff satisfaction within a year.

Limited operations

With more than 2000 staff, Fermilab has been managed since 2007 by Fermi Research Alliance (FRA) – a group that combines the University of Chicago and the Universities Research Association (URA). Serving the DOE’s Office of Science, the group’s remit is to guide the scientific direction of the lab. With the Tevatron proton-antiproton collider having been decommissioned in the 2010s, Fermilab is now repositioning itself as a leader in neutrino science.

The lab’s accelerator complex is currently undergoing a major upgrade for the $1.5bn Long-Baseline Neutrino Facility, which will study the properties of neutrinos in unprecedented detail and examine the differences in behaviour between neutrinos and antineutrinos. It will do so by sending neutrinos towards the Deep Underground Neutrino Experiment (DUNE) in a former gold mine in South Dakota some 1300 km away.

Despite progress on this front, the lab has recently faced a number of challenges. In a 2021 assessment, the DOE gave Fermilab an overall mark of “B”, which fell below the required “B+”. Meanwhile DUNE gained only a “C”, mainly owing to delays and cost overruns. Complaints also emerged in 2022 over Fermilab continuing to restrict access to its campus for scientists and members of the public, despite COVID-19, which had prompted the original restrictions, having become less of a concern.

Then in mid-June, Fermilab’s leadership told an all-hands meeting that the lab would close a significant part of its operations between 26 August and 8 September to reduce a budgetary shortfall. During that time staff would have to take their holidays. Following protests over the decision and “through the active engagement of DOE and FRA”, Fermilab later announced that, rather than closing, it would instead undergo “a limited operations period” for maintenance and repairs during the week of 26 August.

“The majority of Fermilab staff will be on leave and the lab will be closed to the public”, bosses declared.

“Too many deficiencies”

In the new whistleblower report, the group claims there are “too many deficiencies in the culture and behavioural areas” at Fermilab. They point, for example, to the lab’s dismissal in 2023 of an early-career researcher who had alleged sexual assault in 2018, and they raise several alleged cover-ups by management of dangerous behaviour. The report also highlights a case of guns being brought onto Fermilab’s campus in 2023; a male employee’s attack on a female colleague using an industrial vehicle in 2022; and retaliation against an employee who had predicted and warned management about the failures of beryllium windows.

The report in addition accuses FRA of lacking state-of-the-art processes for business, finances and procurement. This “management ineffectiveness”, the whistleblowers charge, has caused a series of “self-inflicted problems”, including unfilled scientific and administrative leadership positions; “serious” budget overruns and delays in key experiments; and several administrative obstacles that slow down or even halt experiments’ scientific productivity. The consequence, the report concludes, is “budget insolvency, with the lab being very much in the red”.

The whistleblowers also say they recently carried out a survey of Fermilab staff, which supposedly found that “a large fraction” are “unhappy” with management and are “desperately looking for change”. The survey, the authors claim, also revealed “poor communication between management and employees, and a decline of trust in management”.

“After so many years at Fermilab, I have developed a deep sentimental involvement with the laboratory, and I sense a diffused lack of confidence in our future,” writes Bellettini in a foreword to the report. “The data of the past 15 years show that responding to demands for a change by delaying any incisive action is not productive. It is not leading to a rousing vision for [high energy physics] in the United States.”

Calling for change

Although Merminga was unavailable for an interview with Physics World for this story, in a message to Fermilab’s employees on 29 July, which has been seen by Physics World, she stated that “the [whistleblower] document asserts various challenges at Fermilab, some of which are inaccurate, and others of which FRA has been working hard to address for some time”. Merminga added that she plans to discuss the issues with staff and then “communicate some of the progress we are making.”

The Fermilab spokesperson also states that access to the Fermilab site for both staff and members of the public “has improved significantly over the last year with updated and streamlined processes” in a bid to improve confidence and trust in the lab.

The issues at Fermilab are, however, also hindering the DOE, which earlier this year called for bids on the contract to operate the lab. The University of Chicago and the URA have submitted a contract bid together with other partners. Associated Universities, Inc., which runs the US-based 100 m-diameter Green Bank Telescope and the Atacama Large Millimeter/submillimeter Array in Chile, has also thrown its hat in the ring. The DOE says it will announce the winner of the contract by 30 September.

The whistleblowers, however, are calling for more than just a change of contractor. They say management should replace Merminga given that she has “[failed] to respond effectively to setbacks [and has] only made things worse”. “A new management team, one hopes, would be motivated to solve problems and would enjoy a ‘honeymoon period’, enabling them to make positive changes more easily,” the report states.

Bellettini, meanwhile, told Physics World that he has not received a response to the report from Fermilab’s management. “Hopefully, the [report] will raise an aggressive discussion within DOE and the lab management leading to substantial improvements in how the lab programme is presently conceived and performed,” he says. “The time to act is now.”

  • This article was amended on 15 August 2024 to clarify the role of the FRA’s constituent organizations in bidding for the next contract, and to correct the date that Lia Merminga became Fermilab’s director. It was also amended on 19 August 2024 to make clear the report included the views of visiting experimentalists at Fermilab.

The post Fermilab is ‘doomed’ without management overhaul claims whistleblower report appeared first on Physics World.

]]>
News A group of anonymous whistleblowers say that Fermilab is in 'crisis' and needs a management shake-up https://physicsworld.com/wp-content/uploads/2024/08/Fermilab-scaled.jpg
Superconductivity appears in nickelate crystals under pressure https://physicsworld.com/a/superconductivity-appears-in-nickelate-crystals-under-pressure/ Thu, 15 Aug 2024 08:30:48 +0000 https://physicsworld.com/?p=116249 Could nickel-oxide-based compounds be a new class of high-temperature superconductors?

The post Superconductivity appears in nickelate crystals under pressure appeared first on Physics World.

]]>
Diagram showing that as pressure increases, spin-charge order is suppressed and bulk superconductivity emerges in La4Ni3O10−δ

Researchers from Fudan University in Shanghai, China, report that they have discovered high-temperature superconductivity in trilayer single crystals of nickel-oxide materials under high pressure. These materials appear to superconduct in a different way than the better-known copper-oxide superconductors, and the researchers say they could become a new platform for studying high-temperature superconductivity.

Superconductors are materials that conduct electricity without resistance when cooled to below a certain critical transition temperature Tc. The first superconductor to be discovered was solid mercury in 1911, but its transition temperature is only a few degrees above absolute zero, meaning that expensive liquid helium coolant is required to keep it in the superconducting phase. Several other “conventional” superconductors, as they are known, were discovered shortly afterwards, all with similarly low values of Tc.

In the late 1980s, however, physicists discovered a new class of “high-temperature” superconductors that have a Tc above the boiling point of liquid nitrogen (77 K). These “unconventional” superconductors are not metals. Instead, they are insulators containing copper oxides (cuprates). Their existence suggests that superconductivity could persist at even higher temperatures, and perhaps even at room temperature – with huge implications for technologies ranging from electricity transmission lines to magnetic resonance imaging.

Nickel oxides could be good high-temperature superconductors

More recently, researchers identified nickel oxide materials – nickelates – as additional high-temperature superconductors. In 2019, a team at Stanford University in the US observed superconductivity in materials containing an effectively infinite number of periodically repeating planes of nickel and oxygen atoms. Then, in 2023, a team led by Meng Wang of China’s Sun Yat-Sen University detected signs of superconductivity in bilayer lanthanum nickel oxide (La3Ni2O7) at 80 K under a pressure of 14 gigapascals.

In the latest work, researchers led by Jun Zhao say that they have found evidence for superconductivity in a nickelate with the chemical formula La4Ni3O10−δ (where δ can range from 0 to 0.04). Zhao and colleagues obtained this result by placing crystals of the material into a diamond anvil cell, which is a device that can generate extreme pressures of more than 400 GPa (about four million atmospheres) as it squeezes the sample between the flattened tips of two tiny, gem-grade diamond crystals.

Evidence of superconductivity

In a paper published in Nature, the researchers report two pieces of evidence for superconductivity in their sample. The first is zero electrical resistance – that is, a complete disappearance of electrical resistance at a Tc of around 30 K and a pressure of 69 GPa. The second is the Meissner effect, which is the expulsion of a magnetic field.

“Through direct current susceptibility measurements, we detected a significant diamagnetic response, indicating that the material expels magnetic fields,” Zhao tells Physics World. “These measurements also enabled us to determine the superconducting volume fraction (that is, how much of the material is superconducting and whether superconductivity prevails throughout the material or just a small area). We found that it exceeds 80%, which confirms the bulk nature of superconductivity in this compound.”

The behaviour of this nickelate compound differs from that of the cuprate superconductors. For cuprates, Tc depends on the number of copper oxide layers in the material and reaches a maximum for structures comprising three layers. For nickelates, however, Tc appears to decrease as more NiO2 layers are added. This suggests that their superconductivity stems from a different mechanism – perhaps even one that conforms to the standard theory of superconductivity, known as BCS theory after the initials of its discoverers.

According to this theory, mercury and many other metallic elements superconduct below their Tc because their fermionic electrons pair up to create bosons called Cooper pairs. This pairing occurs due to interactions between the electrons and phonons, which are quasiparticles arising from vibrations of the material’s crystal lattice. However, this theory usually falls short for high-temperature superconductors, so it is intriguing that it might explain some aspects of nickelate behaviour, Zhao says.
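
For a sense of the numbers involved, BCS theory’s weak-coupling formula estimates the transition temperature as Tc ≈ 1.13 ΘD exp(−1/λ), where ΘD is the Debye temperature and λ the electron–phonon coupling strength. The short sketch below evaluates it for generic, illustrative parameter values – they are not fitted to La4Ni3O10−δ:

import math

def bcs_tc(theta_debye_k, coupling):
    # BCS weak-coupling estimate: Tc ≈ 1.13 * Θ_D * exp(-1/λ)
    return 1.13 * theta_debye_k * math.exp(-1.0 / coupling)

# Illustrative values only: Θ_D = 300 K with weak and moderate coupling
print(f"Tc ≈ {bcs_tc(300.0, 0.3):.0f} K")  # ~12 K
print(f"Tc ≈ {bcs_tc(300.0, 0.5):.0f} K")  # ~46 K

The exponential suppression is why conventional phonon-mediated pairing struggles to reach cuprate-like transition temperatures – and also why a BCS-like mechanism for a nickelate with a Tc of around 30 K is not implausible.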

“That the layer-dependent Tc in nickelates is distinct from that observed in cuprates suggests unique interlayer coupling and charge transfer mechanism specific to the former,” says Zhao. “Such a unique trilayer structure provides a good platform to understand the role of this coupling in electron pairing and could allow us to better understand the mechanisms behind superconductivity in general and lead to the development of new superconducting materials and applications.”

A promising class of superconducting materials?

Weiwei Xie, a chemist at Michigan State University, US, who was not involved in this work, says that La4Ni3O10−δ might indeed be a conventional superconductor and that the new study could help to establish nickel oxides as a promising class of superconducting materials. However, she notes that several recent papers claiming to have observed high-temperature superconductivity in a different group of materials – hydrides – were later retracted because their findings could not be reproduced by independent research groups. “These papers are never far from our minds,” she tells Physics World.

In a News and Views article published in Nature, however, Xie strikes a hopeful note. “The [new] report has set the stage for a potentially fruitful path of research that could lead to an end to the controversy surrounding unreliable measurements,” she writes.

For their part, the Fudan University researchers say they now aim to identify other differences between the superconducting mechanisms in the nickelates and cuprates. “We will also be continuing to search for more superconducting nickelates,” Zhao reveals.

The post Superconductivity appears in nickelate crystals under pressure appeared first on Physics World.

]]>
Research update Could nickel-oxide-based compounds be a new class of high-temperature superconductors? https://physicsworld.com/wp-content/uploads/2024/08/superconductivity_315452609_Shutterstock_SIM-VA1.jpg
NIST publishes first set of ‘finalized’ post-quantum encryption standards https://physicsworld.com/a/nist-publishes-first-set-of-finalized-post-quantum-encryption-standards/ Thu, 15 Aug 2024 07:56:46 +0000 https://physicsworld.com/?p=116233 The algorithms are designed to withstand the attack of a quantum computer

The post NIST publishes first set of ‘finalized’ post-quantum encryption standards appeared first on Physics World.

]]>
A set of encryption algorithms that are designed to withstand hacking attempts by a quantum computer has been released by the US National Institute of Standards and Technology (NIST). The algorithms, which should also protect against the increasing threat of AI-based attacks, are the result of an eight-year effort by NIST. They contain the encryption algorithms’ computer code, instructions for how to implement them and details of their intended uses.

Encryption is widely used to protect the contents of electronic information, with encrypted data able to be sent safely across public computer networks because it is unreadable to all but its sender and intended recipient. Encryption tools rely on complex mathematical problems that conventional computers find difficult or impossible to solve. Quantum computers, however, could outperform their classical counterparts and crack current encryption methods.
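
As a toy illustration of what is at stake, the sketch below runs a Diffie–Hellman key exchange with deliberately tiny numbers. Its security rests on the hardness of the discrete-logarithm problem, which a large quantum computer running Shor’s algorithm would break; the parameter values here are illustrative only, as real systems use vastly larger ones:

# Toy Diffie-Hellman key exchange: both parties derive the same shared
# secret, while an eavesdropper must solve a discrete logarithm to get it.
p, g = 23, 5                  # small public prime and generator (toy values)

a_secret, b_secret = 6, 15    # private keys (normally random and huge)
A = pow(g, a_secret, p)       # Alice publishes g^a mod p
B = pow(g, b_secret, p)       # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value
assert pow(B, a_secret, p) == pow(A, b_secret, p)
print("shared secret:", pow(B, a_secret, p))   # 2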

In 2016 NIST announced an open competition in which researchers were invited to submit algorithms to be considered as a “post-quantum” cryptography (PQC) standard to stymie both conventional and quantum computers. In 2022 NIST said that four algorithms would be developed further. CRYSTALS-Kyber protects information exchanged across a public network, while CRYSTALS-Dilithium, FALCON and SPHINCS+ concern digital signatures and identity authentication.

The three final algorithms, which have now been released, are ML-KEM, previously known as CRYSTALS-Kyber; ML-DSA (formerly CRYSTALS-Dilithium); and SLH-DSA (formerly SPHINCS+). NIST says it will release a draft standard for FALCON later this year. “These finalized standards include instructions for incorporating them into products and encryption systems,” says NIST mathematician Dustin Moody, who heads the PQC standardization project. “We encourage system administrators to start integrating them into their systems immediately.”

Duncan Jones, head of cybersecurity at the firm Quantinuum, welcomes the development. “[It] represents a crucial first step towards protecting all our data against the threat of a future quantum computer that could decrypt traditionally secure communications,” he says. “On all fronts – from technology to global policy – advancements are causing experts to predict a faster timeline to reaching fault-tolerant quantum computers. The standardization of NIST’s algorithms is a critical milestone in that timeline.”

The post NIST publishes first set of ‘finalized’ post-quantum encryption standards appeared first on Physics World.

]]>
News The algorithms are designed to withstand the attack of a quantum computer https://physicsworld.com/wp-content/uploads/2024/08/quantum-circuit-concept-1206098096-iStock_Quardia.jpg
Atomic clocks on the Moon could create ‘lunar positioning system’ https://physicsworld.com/a/atomic-clocks-on-the-moon-could-create-lunar-positioning-system/ Wed, 14 Aug 2024 14:41:02 +0000 https://physicsworld.com/?p=116261 Lunar time standard would avoid pitfalls of time dilation

The post Atomic clocks on the Moon could create ‘lunar positioning system’ appeared first on Physics World.

]]>
Atomic clocks on the Moon. It might sound like a futuristic concept, but atomic clocks already abound in space. They can be found on Earth-orbiting satellites that provide precision timing for many modern technologies.

The clocks’ primary function is to generate the time signals that are broadcast by satellite navigation systems such as GPS. These signals are also used to time-stamp financial transactions, enable mobile-phone communications and coordinate electricity grids.

But why stop at orbits a mere 20,000 km from Earth’s surface? Should we establish a network of atomic clocks on the Moon? This is the subject of a new paper by two physicists at NIST in Boulder, Colorado – Neil Ashby and Bijunath Patla.

They say that their study was inspired by NASA’s ambitious Artemis programme, which aims to land people on the Moon as early as 2026. The duo point out that navigation and communications on and near the Moon would benefit from a precision time standard. One option is to use a time signal that is broadcast from Earth to the Moon. Another option is to create a lunar time standard using one or more atomic clocks on the Moon, or in lunar orbit.

Faster pace

The problem with using a signal from Earth is that a clock on the Moon runs at a faster pace than a clock on Earth. This time dilation is caused by the difference in gravitational potential at the two locations and is described nicely by Einstein’s general theory of relativity.

Using that theory, the NIST duo calculate that a clock on the Moon will gain about 56 µs per day when compared to a clock on Earth. What’s more, this rate is not constant because of the eccentricity of the Moon’s orbit and the changing tidal effects of solar-system bodies other than the Earth, which would also cause fluctuations in the difference between earthbound and Moon-bound clocks.
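
The headline number can be reproduced, at least approximately, from textbook general relativity. The sketch below keeps only the leading gravitational-potential and velocity terms – a back-of-the-envelope estimate under simplifying assumptions, not the authors’ full calculation:

# Rough estimate of the daily rate difference between a clock on the
# Moon's surface and one on Earth's surface, keeping only the leading
# gravitational and velocity terms.
C2 = (2.99792458e8) ** 2                       # speed of light squared, m^2/s^2
GM_EARTH, R_EARTH = 3.986004418e14, 6.371e6    # m^3/s^2, m
GM_MOON, R_MOON = 4.9048695e12, 1.7374e6
D_MOON = 3.844e8                               # mean Earth-Moon distance, m
V_EARTH_ROT = 465.0                            # Earth's equatorial rotation speed, m/s
V_MOON_ORB = 1022.0                            # Moon's mean orbital speed, m/s

# The lunar clock is sped up by sitting higher in Earth's potential well and
# slowed by the Moon's own potential, Earth's residual potential at lunar
# distance, and the Moon's orbital motion.
rate = (GM_EARTH / R_EARTH + V_EARTH_ROT**2 / 2
        - GM_MOON / R_MOON
        - GM_EARTH / D_MOON
        - V_MOON_ORB**2 / 2) / C2

print(f"Lunar clock gains ~{rate * 86400 * 1e6:.0f} microseconds per day")  # ~56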

Because of these variations, the duo argue that it would be better to create a network of atomic clocks on the surface of the Moon – and in lunar orbit. This would provide a distributed system of lunar time, much like the distributed system that currently exists on Earth.

“It’s like having the entire Moon synchronized to one ‘time zone’ adjusted for the Moon’s gravity, rather than having clocks gradually drift out of sync with Earth’s time,” explains Patla. This could form the basis of a high-precision lunar positioning system. “The goal is to ensure that spacecraft can land within a few metres of their intended destination,” Patla says.

They also calculated the difference in clock rates on Earth and at four of the Lagrange points in the Earth–Moon system. These are places where satellites can sit fixed relative to the Earth and Moon. There, clocks would gain a little more than 58 µs per day compared to clocks on Earth.

They conclude that atomic clocks placed on satellites at these Lagrange points could be used as time transfer links between the Earth and Moon.

The research is described in The Astronomical Journal.

The post Atomic clocks on the Moon could create ‘lunar positioning system’ appeared first on Physics World.

]]>
Blog Lunar time standard would avoid pitfalls of time dilation https://physicsworld.com/wp-content/uploads/2024/08/14-8-24-Lunar-positioning-system.jpg
Wearable PET scanner allows brain scans of moving patients https://physicsworld.com/a/wearable-pet-scanner-allows-brain-scans-of-moving-patients/ Wed, 14 Aug 2024 12:00:09 +0000 https://physicsworld.com/?p=116228 A helmet-like upright imaging device has potential to enable previously impossible neuroimaging studies

The post Wearable PET scanner allows brain scans of moving patients appeared first on Physics World.

]]>
Imaging plays a vital role in diagnosing brain disease and disorders, as well as advancing our understanding of how the human brain works. Existing brain imaging modalities, however, usually require the subject to lie flat and motionless – precluding use in people who cannot remain still or studies of the brain in motion.

To address these limitations, neuroscientists at West Virginia University have developed a wearable, motion-compatible brain positron emission tomography (PET) imager and demonstrated its use in a real-world setting. The device, described in Communications Medicine, could potentially enable previously impossible neuroimaging studies.

“We wanted to create and test a tool that could grant access to imaging the brain – including deep areas – while humans are moving around,” explains senior author Julie Brefczynski-Lewis. “We hope our device could allow the investigation of research questions related to natural upright behaviours, or the study of patients who are normally sedated due to movement issues or challenges in understanding the need to be perfectly still for a scan, which could happen with cognitive impairments or dementias.”

PET scans provide information on neuronal and functional activity by imaging the uptake of radioactive tracers in the brain. But clinical PET systems are extremely sensitive to motion and require supine (lying down) imaging and dedicated scanning rooms. There are neuroimaging techniques that can be used with patients upright and moving – such as functional near-infrared spectroscopy and high-density diffuse optical tomography – but these optical approaches only image the brain surface. Activity in deep brain structures remains unseen.

Julie Brefczynski-Lewis with the prototype AMPET device

To enable upright and motion-tolerant imaging, Brefczynski-Lewis and colleagues designed the AMPET (ambulatory motion-enabling positron emission tomography), a helmet-like device that moves along with the subject’s head. The imager is made from a ring of 12 lightweight detector modules, each comprising arrays of silicon photomultipliers coupled to pixelated scintillation crystal arrays. The imager ring has a 21 cm field-of-view and a central spatial resolution of 2 mm in the tangential direction and 2.8 mm in the radial direction.

Real-world scenarios

The researchers tested the AMPET device on 11 volunteer patients who were scheduled for a clinical PET scan on the same day. The helmet was positioned on each participant’s head such that it imaged the top of the brain, comprising the primary motor areas. Although it only weighs 3 kg, the team chose to suspend the helmet from above so that participants would not feel any weight while moving their head.

Patients received a low dose (10–20% of their total prescription) of the metabolic PET tracer 18F-FDG. “We chose a very low dose that was within the daily dose for clinical patients,” says Brefczynski-Lewis. “Some applications may require a slightly higher dose for extra sensitivity, but because the detectors are so close to the head, a full clinical-like dose would not likely be necessary.”

Immediately after tracer injection, each participant underwent AMPET imaging for 6 min while they switched between standing still and walking-in-place every 30 s. Following a 5 min transition, subjects were then scanned for another 5 min while alternating between sitting still and lifting their leg while seated. In this second imaging session, the team moved the AMPET lower around the head for five participants, to image deeper brain structures.

Meeting the goals

The team defined three goals to validate the AMPET prototype: motion artefacts of less than 2 mm; differential activation of cortical regions of interest (ROIs) related to leg movement; and differential activation to walking movements in deep brain structures.

The walking versus standing task allowed the researchers to test for any motion of the imager relative to the head. They observed an average movement-related misalignment of just 1.3 mm. Analysis of task-related activity showed the expected brain image patterns during walking, with activity in ROIs that control leg movements significantly greater than in all other imaged ROIs.

In the four participants where activity was measured from deep brain structures (the fifth had incorrect helmet placement), the team observed differential activation in various deep-lying structures, including the basal nuclei.

The researchers note that one volunteer had a prosthetic right leg. While performing upright walking, his brain patterns showed greater metabolic activity in the area that represented the intact leg. In contrast, no difference in activity between left and right leg ROIs was measured in the other participants.

Brefczynski-Lewis tells Physics World that patients found the AMPET reasonably comfortable and did not feel its weight on their head or neck. Certain movements, however, were slightly inhibited, especially tilting the head towards the shoulders. “Our engineer collaborators recommended a gyroscope mechanism to enable free movement in all directions,” she says.

As well as validating the prototype, the study also identified upgrades required for the AMPET and similar systems. “The great thing about a real-world study on humans was that it showed us which logistics to optimize,” explains Brefczynski-Lewis. “We are developing a system for good placement and monitoring the alignment of the imager relative to the head, as well as widening the coverage to increase sensitivity, and testing a movement task using a bolus-infusion paradigm.”

The post Wearable PET scanner allows brain scans of moving patients appeared first on Physics World.

]]>
Research update A helmet-like upright imaging device has potential to enable previously impossible neuroimaging studies https://physicsworld.com/wp-content/uploads/2024/08/14-08-24-BrainScanner.jpg newsletter1
Goats, sports cars and game shows: the unexpected science behind machine learning and AI https://physicsworld.com/a/goats-sports-cars-and-game-shows-the-unexpected-science-behind-machine-learning-and-ai/ Wed, 14 Aug 2024 10:00:15 +0000 https://physicsworld.com/?p=115804 Matt Hodgson reviews Why Machines Learn by Anil Ananthaswamy

The post Goats, sports cars and game shows: the unexpected science behind machine learning and AI appeared first on Physics World.

]]>
An illuminated brain surrounded by tasks, depicting an AI brain

Artificial intelligence (AI) is rapidly becoming an integral part of our society. In fact, as I write this article, I have access to several large language models, each capable of proofreading and editing my text. While the use of AI can be controversial – who’s to say I really wrote this article? – it may soon be so commonplace that mentioning it will be as redundant as noting the use of a word processor. Just to be clear though, this review is all my own work.

It’s still early days for AI, but it has the potential to impact every aspect of our lives, from politics to education and from healthcare to business. As AI is used more widely, it’s not just computer scientists who need to understand how machines think and come to conclusions. Society as a whole must have a basic appreciation and understanding of how AI works to make informed decisions.

Why Machines Learn, by the award-winning science writer Anil Ananthaswamy, takes the reader on an entertaining journey into the mind of a machine. Ananthaswamy draws inspiration from the subject of his book; much like training a neural network, he uses well-designed examples to build the reader’s understanding step by step.

Whereas AI is a general term that covers human-like qualities such as reasoning and adaptive intelligence, the author’s focus is on the subfield of AI known as “machine learning”, which is all about how we extract knowledge from data. Starting with the fundamentals, Ananthaswamy carefully constructs a comprehensive picture of how machines learn complex relationships.

Anyone who thinks that AI is a modern invention, or that the road to today’s technology has been smooth, will be shocked to learn that the pursuit of a “learning machine” began in the 1940s with Warren McCulloch and Walter Pitts’ model of a biological neuron. Funding for this now billion-dollar industry has sometimes also been meagre. What’s more, the concepts underpinning modern AI have their roots in diverse and unexpected areas, from the idle curiosities of academics to cholera outbreaks and even game shows.

Ananthaswamy’s background in electronics and computer engineering is evident throughout this book, for example in how he introduces several technical and mathematical concepts needed to grasp the power and limitations of machine learning. He begins with the US psychologist Frank Rosenblatt’s development in the late 1950s of the “perceptron” – a basic, single-layer learning algorithm with a binary output (“yes” or “no”). The author then shows how decades of innovation have led to deep neural networks with billions of neurons, such as ChatGPT, capable of giving nuanced and insightful responses.

Unusually for a popular-science book, Why Machines Learn includes quite a lot of mathematics and equations as it explores how vectors, linear algebra, calculus, optimization theory, statistics and probability can be employed to engineer a synthetic brain. Given the complexity of these topics, Ananthaswamy is careful to provide frequent recaps of key concepts so that readers less familiar with these ideas don’t become lost or overwhelmed.

Fortunately, the author has the uncanny ability to answer questions with simple and illuminating examples just as they arise in the reader’s mind. Although this might seem to diminish some of the magic, it’s inspiring to see how such a powerful system can be constructed from deceptively simple components.

Gaming the system

For topics such as probability, this pedagogical approach is essential for anyone without a strong background in mathematics. For instance, the infamous Monty Hall problem is so counterintuitive that even some of the world’s most renowned mathematicians struggled to accept its solution. Indeed, as Ananthaswamy notes, Paul Erdős – one of the most prolific mathematicians of the 20th century – “reacted as if he’d been stung by a bee” when confronted with the answer.

Illustration of the Monty Hall problem, showing doors containing a car or goats, and what happens when each door is chosen

The dilemma is based on the classic US TV game show Let’s Make a Deal, hosted by Hall, in which you, the contestant, have a choice of three doors. Behind one is a brand-new sports car, while the other two doors conceal goats. After you pick a door (but don’t get to see what’s behind it) the host opens one of the other two doors to reveal a goat. The host then offers you the chance to switch your choice to the remaining, unopened door.

To maximize your chances of winning the car, you should always switch. It’s counterintuitive and controversial. Surely, if you’ve got two doors to pick from, the odds of getting the car must be 50–50? In other words, why would you bother switching if there’s a car behind one door and a goat behind the other? In fact, your first pick is right only one time in three, so the car sits behind the remaining door two times in three – switching doubles your chances. The book goes on to explain how the logic behind this decision is a cornerstone of machine learning: bypassing human intuition, which is often flawed.
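
Sceptical readers can check this for themselves. The short Monte Carlo simulation below – our illustration, not the book’s – plays the game many times with each strategy:

import random

def play(switch, trials=100_000):
    """Simulate Monty Hall games and return the fraction won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # ~0.333
print(f"switch: {play(switch=True):.3f}")    # ~0.667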

Ananthaswamy shows that machine learning is a powerful method of analysis. He first demonstrates how data can be represented in an abstract, high-dimensional space, before explaining how collapsing the number of dimensions allows patterns in the data to be found. With the assistance of some elegant linear algebra, this process can be engineered so that the data is categorized most effectively, highlighting strong correlations and leading to more reliable performance.
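
As a concrete, if simplified, illustration of this kind of dimensional collapse, the sketch below applies principal component analysis – one standard linear-algebra technique of the sort the book describes – to synthetic data containing a hidden correlation:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10))            # 200 points in ten dimensions
data[:, 0] = 2 * data[:, 1] + rng.normal(scale=0.1, size=200)  # hidden correlation

# Centre the data, then find the directions of greatest variance via an SVD.
centred = data - data.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centred, full_matrices=False)

reduced = centred @ vt[:2].T                 # collapse ten dimensions to two
print(reduced.shape)                         # (200, 2)
print(singular_values[:3].round(1))          # the leading value reflects the correlated pair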

Cautious embrace

In our data-driven world, this remarkable capability of AI is something that should be embraced. However, like any data-analysis method, machine learning is prone to biases and errors. Why Machines Learn gives the reader an awareness and understanding of these shortcomings, allowing AI to be used more effectively.

This is particularly important as AI becomes mainstream. It’s common for people to mistakenly believe that it’s some all-powerful super brain, capable of completing any task with ease. This misconception can lead to the misuse of AI or the unfair perception of AI as overrated when it falls short of this unrealistic standard. Ananthaswamy gives his readers an appreciation of how machine learning works and, hence, how to use it appropriately, which may help combat the abuse of AI.

By exploring the fundamental principles of machine learning in such detail, the book makes it evident that machines are far from achieving human-like intelligence. While the secrets of the human brain remain elusive, Why Machines Learn demystifies the underlying mechanisms of machine learning, which may lead to a better understanding of the learning process itself and the development of improved AI.

This inevitably requires the reader to confront advanced and counterintuitive concepts in various branches of mathematics and logic, from collapsing dimensions to mind-bending games of chance. Those who invest the time and effort will reap the rewards that come from understanding a technology with the potential to revolutionize many aspects of our lives.

  • 2024 Penguin/Allen Lane 480pp £30hb/£16.99 ebook

Quantum oscillators fall in synch even when classical ones don’t – but at a cost https://physicsworld.com/a/quantum-oscillators-fall-in-synch-even-when-classical-ones-dont-but-at-a-cost/ Wed, 14 Aug 2024 08:57:27 +0000 https://physicsworld.com/?p=116222 For large-scale systems, classical oscillators are more energy-efficient, say theorists

The synchronized flashing of fireflies in summertime evokes feelings of marvel and magic towards nature. How do they do it without a choreographer running the show?

For some physicists, though, these natural fireworks also raise other questions. If the fireflies were quantum, they wonder, would they synchronize their flashing routines faster or slower than their classical counterparts?

Questions of this nature – about how quantum systems synchronize, the energetic costs they pay to do so, and how long it takes them to fall into lockstep – have long bedevilled physicists. Now a team of theorists in the US has begun to come up with answers.  Writing in Physical Review Letters, Maxwell Aifer and Sebastian Deffner of the University of Maryland Baltimore County (UMBC), together with Juzar Thingna of the University of Massachusetts, Lowell (UMass Lowell), present a new take on the energetic cost and time required to synchronize quantum systems. Among other findings, they conclude that quantum systems can synchronize in scenarios where such behaviour would be impossible classically.

How quantum springs synchronize

Studies of synchronization go back to the 1600s, when Christiaan Huygens documented that pendulums placed on a table eventually sway in unison. Huygens called this the “sympathy of pendulums”.

This apparent sympathy between systems – chirping crickets, flashing fireflies, the harmonious firing of pacemaker cells in our hearts – turns out to be ubiquitous in nature. And while it may look like magic, it ultimately stems from information exchanged between individual systems via communication pathways (such as the table in the case of Huygens’ pendulums) available in the shared environment.

“At its core, synchronization is about balance of forces,” Thingna says.

To understand how that balance works, imagine you have a bunch of systems moving in circles of different sizes. The radius of the circle corresponds to the amount of energy in that system.

At first, the systems may all be moving at different paces: some faster, others slower. To synchronize, the circles must interact in such a way that, gradually, the radii of the circles and the paces of the systems become the same – meaning that bigger circles must “leak” energy and smaller circles gain it.

But synchronization is impressive only when it is resilient and robust. This means that if there are small disturbances – for example, if one of the systems is kicked out of its circle – the disturbed system should return to the radius and pace of the others.
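
A minimal classical simulation shows this locking in action. The sketch below uses the standard Kuramoto model of coupled phase oscillators – a textbook stand-in chosen for illustration, not the quantum model studied in the paper:

import numpy as np

rng = np.random.default_rng(1)
n, k, dt, steps = 50, 1.5, 0.01, 5000
omega = rng.normal(0.0, 0.5, n)          # natural frequencies ("paces")
theta = rng.uniform(0, 2 * np.pi, n)     # initial phases

def coherence(theta):
    """Order parameter r: 1 means perfect synchrony, ~0 means incoherence."""
    return abs(np.mean(np.exp(1j * theta)))

print(f"before: r = {coherence(theta):.2f}")
for _ in range(steps):
    # Each oscillator is pulled towards the phases of all the others.
    pull = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + k * pull)
print(f"after:  r = {coherence(theta):.2f}")   # rises towards 1 once k is large enough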

This picture works for classical systems, but synchronization in the quantum regime is more complex. “The challenge lies in translating the classical concept of synchronization to the quantum world where trajectories are ill-defined concepts due to Heisenberg’s uncertainty principle,” explains Christopher Wächtler, a quantum physicist at the University of California, Berkeley, US who was not involved in this work.

A diagram showing the system of coupled pendulum-like oscillators and a pair of plots showing synchronization in the classical and quantum regimes

Taking inspiration from an experimental setup, the UMBC-UMass Lowell team created a model based on quantum oscillators or springs (a well-known quantum system) that interact with each other via a biased channel – an anti-Hermitian coupling in which one oscillator is favoured more than the other. This biased channel controls the flow of energy in and out of the individual springs. The oscillators also leak information by “talking” to a common thermal environment at a given temperature.

Thanks to this combination of a common thermal environment and a biased inter-system communication channel, the team was able to balance the information flow (that is, the communication between the oscillators and communication with the environment) and synchronize quantum systems in a way similar to how classical systems are synchronized.

The economics of sympathy

This approach is unusual because quantum synchronization research typically explores the quantum systems in their synchronized state after they have been coupled for a long time. In this case, however, the researchers focus on the time before the steady state has been reached, “which in my opinion is an important question to ask,” says Christoph Bruder, a physicist at the University of Basel, Switzerland who was not involved in the study.

To estimate the time it takes to synchronize, the UMBC-UMass Lowell researchers use quantum speed limits, which are a mathematical way of deriving the minimum time it takes a system to go from an initial state to a desired final state. They find that quantum oscillators synchronize the fastest when the conversations between oscillators do not leak – that is, the strength of the interaction between the oscillators outweighs the interaction with the environment.
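
The flavour of such bounds can be seen in the textbook Mandelstam–Tamm speed limit (a generic example, not the specific bound derived in the paper), which says a state with energy uncertainty ΔE needs a time of at least πħ/(2ΔE) to evolve into an orthogonal state:

import math

hbar = 1.054_571_817e-34        # reduced Planck constant in J s
delta_E = 1.0e-24               # energy uncertainty in joules (made-up value)
t_min = math.pi * hbar / (2 * delta_E)
print(f"t_min = {t_min:.2e} s")  # -> ~1.66e-10 s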

The team also used ideas from quantum thermodynamics to identify a lower bound on the energetic cost of synchronization. This bound depends on the biased way in which the oscillators talk to each other.

But there is no free lunch.

While synchronizing a small number of quantum systems is energetically more efficient than doing the same for their classical counterparts, the researchers report that this advantage is not scalable: when there are many systems, classical oscillators are more energy efficient than quantum ones. However, the researchers found that their model system does exhibit quantum synchronization for a wider range of interaction strengths than is the case for classical oscillators, making synchronization in the quantum regime more resilient and robust.

Though the work is still theoretical at this point, Wächtler says that a minimal version of the team’s model could be “effectively implemented in a lab”. The team is keen to explore this further. “For us, this is the first stepping-stone towards this goal of how to make synchronization more practical,” Thingna says.

DUNE prototype detector records its first accelerator-produced neutrinos https://physicsworld.com/a/dune-prototype-detector-records-its-first-accelerator-produced-neutrinos/ Tue, 13 Aug 2024 14:04:06 +0000 https://physicsworld.com/?p=116215 Scientists will use the detector to study the interactions between antineutrinos and argon

A prototype argon detector belonging to the Deep Underground Neutrino Experiment (DUNE) in the US has recorded its first accelerator-produced neutrinos. The detector, located at Fermilab near Chicago, was installed in February in the path of a neutrino beamline. After what Fermilab physicist Louise Suter calls a “truly momentous milestone”, the prototype device will now be used to study the interactions between antineutrinos and argon.

DUNE is part of the $1.5bn Long-Baseline Neutrino Facility (LBNF), which is designed to study the properties of neutrinos in unprecedented detail and examine the differences in behaviour between neutrinos and antineutrinos. Construction of LBNF/DUNE began in 2017 at the Sanford Underground Research Facility in South Dakota, which lies some 1300 km to the west of Fermilab. When complete, DUNE will measure the neutrinos generated by Fermilab’s accelerator complex.

Earlier this year excavation work was completed on the two huge underground spaces that will be home to DUNE. Lying 1.6 km below ground in a former gold mine, the spaces are some 150 m long and seven storeys tall and will house DUNE’s four neutrino detector tanks, each filled with 17 000 tonnes of liquid argon. DUNE will also feature a near-detector complex at Fermilab that will be used to analyse the intense neutrino beam from just 600 m away.

The “2×2 prototype” detector – so called because it has four modules arranged in a square – records particle tracks with liquid-argon time-projection chambers to reconstruct a 3D picture of each neutrino interaction.

“It is fantastic to see this validation of the hard work put into designing, building and installing the detector,” says Suter, who co-ordinated installation of the modules.

It is hoped that the DUNE detectors will become operational by the end of 2028.

Spot the knot: using AI to untangle the topology of molecules https://physicsworld.com/a/spot-the-knot-using-ai-to-untangle-the-topology-of-molecules/ Tue, 13 Aug 2024 13:00:11 +0000 https://physicsworld.com/?p=115847 Solving a centuries-old mathematical puzzle could hold the key to understanding the function of many of the molecules of life

Any good sailor knows that the right choice of knot can mean the difference between life and death. Whether it hoists the sails or secures the anchor, a rope is only as good as the knot that’s tied in it. The same is true, on a much smaller scale, for many of the molecules that keep us alive.

Proteins are essential building blocks for all living things, and these long chains of amino acids form complex 3D shapes that allow molecules to fit together. For a long time, it was thought that while proteins can be highly tangled, they could not form knots under normal conditions, as this would prevent the proteins from being able to fold. But in the 1970s researchers found many topologically knotted proteins, in which their native structures are arranged in the form of an open knot.

As it happens, despite proteins (and even DNA) having “open” curves, knots can still form and affect their function. Indeed, knotted proteins comprise about 1% of the structures in the Protein Data Bank. Unlike a rope or string, each protein of this type has a characteristic knot (figure 1). The largest group of knotted proteins is the SPOUT family of enzymes (which make up the second largest of seven structurally distinct groups of methyltransferase enzymes), all but one of which are knotted in a “trefoil” of three overlapping rings.

1 Knots for life

Some proteins form well-defined knotted structures, as shown above, where the lower image shows a simplified view of each molecule. The number below each image indicates the number of times the protein crosses itself and the + and – indicate that they are mirror images. The –31 and +31 for example are mirror image instances of the “trefoil” knot. Proteins form “open knots” because their two ends don’t join up. However, it is often still possible to define a knotted structure in the molecule.

This discovery raised many questions, such as how and why these knots form, what is the mechanism of their folding, and what role this might play on a functional level. There is some evidence that knotted proteins are more resistant to extreme temperatures, but scientists still do not know how abundant knots are in molecular structures or exactly how knotting affects their biological function.

The trouble is that when we try to apply what we know about knots to questions in biology and soft matter, we come up against a mathematical problem that’s been confounding scientists for over a century.

A tangled history

The origins of modern knot theory are often traced back to a famous experiment that was performed more than 150 years ago – not with ropes or string, but with smoke.

In 1867 Peter Guthrie Tait invited his friend and fellow physicist William Thomson (later Lord Kelvin) to travel from Glasgow to Edinburgh to witness a demonstration where he generated pairs of smoke rings. To Kelvin’s surprise, these rings were remarkably stable, travelling across the room and even bouncing off each other as if they were made of rubber. A smoke ring is a “vortex ring” in which the aerosols and particulates are rotating in small concentric circles, and this motion gives the ring its stability.

At the time, it was widely believed that the universe was pervaded by a space-filling substance dubbed “aether”, through which gravitational and electromagnetic radiation propagated. Kelvin reasoned that atoms might be made from stable vortices, like smoke rings, in this aether. He further argued that knots tied in aether vortex rings could account for the different chemical elements.

Tait was intrigued by Kelvin’s theory. Over a period of 25 years, and with the help of the Church of England minister Thomas Kirkman, American mathematician Charles Little and James Clerk Maxwell, Tait produced a table of 251 knots with up to 10 crossings (figure 2).  The vortex theory of atoms was incorrect, but knot theory continues to this day as a branch of mathematics.

2 Order and disorder

The first seven orders of knottiness

Peter Guthrie Tait and other early knot theorists spent years compiling a comprehensive list of knots. The above image is extracted from their table of knots up to seven crossings – “the first seven orders of knottiness”.

Spot a knot

For Tait and his fellow theorists, the classification of knots was painstaking work. Every time a new knot was proposed, they had to check that it was unique using drawings and geometric intuition. Tait himself wrote that “though I have grouped together many widely different but equivalent forms, I cannot be absolutely certain that all those groups are essentially different from one another”. Indeed, in 1974 Kenneth Perko showed that two entries in the original table are actually the same knot – these are now known as the “Perko pair” (Proc. Amer. Math. Soc. 45 262).

If you need any more convincing, my student Djordje Mihajlovic has developed an online game called “Spot a Knot” where the goal is to spot equivalent knots from pictures (figure 3). Even after years of researching knots, I often get it wrong. To earn a spot in the table, a knot must have a unique topology, meaning that it cannot be deformed into any other known knot without being broken. As the Perko pair and Mihajlovic’s game show, proving that two knots are different is easier said than done. Remember that topology studies the properties of spaces that do not change if they are deformed smoothly; to a topologist, a mug is equivalent to a doughnut because one can be massaged into the other without losing the inner hole.

3 Brain teaser

Figure 3

To illustrate the difficulty of identifying knots, Djordje Mihajlovic – a PhD student at the University of Edinburgh – developed an online game called “Spot a Knot”. One question is reproduced above. Does the top image correspond to a, b, c, d or e?

As scientists learned more about the structure of the atom, the vortex atom model was gradually abandoned. A final blow came in 1913 when Henry Moseley showed that chemical elements are differentiated not by their topology but by the number of protons in the nucleus.

In knot theory, quantities that describe the properties of knots are called “invariants”. The dream of knot theorists is to find a quantity like the proton number that can classify any knot based on its topology. Such a “complete invariant” would yield a unique value for every unique knot, and wouldn’t change if the knot were smoothly deformed.

A recipe for such a topological invariant could be something like this: “Walk along the knot and label each of the n crossings with numbers 1, 2, 3, …, 2n (you will pass each crossing twice). If the label is even and the line is an overcrossing, then change the sign of the label to minus (figure 4). At the end, each crossing will be labelled by a pair of integers, one even and one odd. The series of even integers is a code for the knot.” This recipe is called the Dowker–Thistlethwaite code, first proposed in 1983 (Topology and its Applications 16 19) (figure 5).
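
The recipe is simple enough to automate. The sketch below is one possible implementation, assuming the diagram is supplied as the sequence of crossings met during a traversal, each tagged with whether the strand passes over; fed a standard trefoil diagram, it returns the familiar code 4 6 2:

def dowker_thistlethwaite(traversal):
    """Compute the Dowker-Thistlethwaite code of a knot diagram.
    `traversal` lists the 2n crossing visits in walking order as
    (crossing_id, is_over) pairs."""
    labels = {}
    for step, (crossing, is_over) in enumerate(traversal, start=1):
        # Even labels assigned on an overcrossing pick up a minus sign.
        signed = -step if (step % 2 == 0 and is_over) else step
        labels.setdefault(crossing, []).append(signed)
    pairs = []
    for pair in labels.values():
        odd = next(x for x in pair if abs(x) % 2 == 1)
        even = next(x for x in pair if abs(x) % 2 == 0)
        pairs.append((abs(odd), even))
    # Read off the even labels in order of their odd partners.
    return [even for _, even in sorted(pairs)]

# The trefoil: three crossings, each passed twice, alternating over/under.
trefoil = [(1, True), (2, False), (3, True), (1, False), (2, True), (3, False)]
print(dowker_thistlethwaite(trefoil))   # -> [4, 6, 2]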

The Dowker–Thistlethwaite code can classify many simple knots, but like every other method that’s been proposed, it isn’t a complete invariant. The first knot invariant was proposed in 1928 by James W Alexander and called the Alexander polynomial. Since then, many others have been developed, but for each one, a case has been found where it fails to make a unique classification.

Taking a walk

The Alexander polynomial belongs to the family of so-called “algebraic invariants”. It is computed by constructing a matrix with as many rows and columns as there are crossings in the knot, and taking its determinant. Algebraic invariants are constructed from a 2D projection of the knot. This is a bit like a shadow, but one where we can discern which part of the loop is on top each time it crosses itself.

Soft-matter physicists like myself, however, want to classify the knots in molecules like proteins and DNA, which are 3D and constantly jostled by thermal energy. Reducing these molecules to 2D projections erases spatial features that may be crucial to their function.

An attractive alternative for characterizing molecules is “geometric invariants”. These are calculated by traversing the knot in 3D and computing some geometric property, such as the curvature, along the route.

One such invariant that I am fond of is the “writhe”, which was introduced by Tait. Writhe can be measured on a 2D projection by counting the “over” and “under” crossings and subtracting one from the other (figure 4b).

4 Over and under

Figure 4

One way to tell the difference between knots is to measure the “writhe”, which quantifies the amount of twisting. (a) Each time the knot crosses itself, the crossing can be characterized as either an overcrossing (left) or an undercrossing (right). The writhe is calculated by subtracting the number of undercrossings from the number of overcrossings.

(b) How the writhe is calculated for two knots – the cinquefoil knot (left), which has a writhe of +5, and the figure-eight knot (right), which has a writhe of 0.

(c) The writhe can also be calculated as a geometric quantity on a 3D molecular knot such as a protein. The geometric writhe can be calculated over the entire knot or as a local quantity between short, adjacent strands. A high value of the “local writhe” indicates that the strands are entangled with each other. Davide Michieletto and colleagues showed that a neural network trained on the local writhe characterizes knot topology with high accuracy.

However, writhe can also be computed as a geometric quantity. Imagine walking along a 3D knot, such as a protein, and at each step writing down an estimate of the writhe by counting the crossings you can see. At the end of your journey, the average of these numbers will yield the true value of the writhe. Unfortunately, writhe isn’t a complete invariant. In fact, like its algebraic counterparts, no geometric invariant has ever been proved to uniquely classify all knots.
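
In code, this geometric writhe can be estimated for any closed 3D curve with a discretized version of the standard Gauss double integral – sketched here for a polygonal curve, with a flat circle (whose writhe should vanish) as a sanity check:

import numpy as np

def writhe(points):
    """Writhe of a closed 3D curve given as an (N, 3) array of vertices."""
    seg = np.roll(points, -1, axis=0) - points   # edge vectors
    mid = points + 0.5 * seg                     # edge midpoints
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = mid[i] - mid[j]
            d = np.linalg.norm(r)
            total += np.dot(np.cross(seg[i], seg[j]), r) / d**3
    return total / (2 * np.pi)   # the double integral counts each pair twice

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(round(writhe(circle), 6))   # ~0.0 for a planar curve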

In 2021 Google DeepMind’s AlphaFold artificial-intelligence programme solved a problem that had been evading scientists for decades – how to predict a protein’s structure from its amino acid sequence (Nature 596 583). The function of proteins depends on their 3D structure, so AlphaFold is a powerful tool for drug discovery and the study of disease.

The question we asked ourselves was: could AI do the same for the knot invariant problem? 

Wriggle and writhe

Using AI to classify knots has been explored by previous researchers, most recently by Olafs Vandans and colleagues of the City University of Hong Kong in 2020 (Phys. Rev. E 101 022502) and Anna Braghetto of the University of Padova and team in 2023 (Macromolecules 56 2899). In those studies, they treated the different knots like strings of beads and trained a neural network to identify them by giving it the Cartesian coordinates and, in the latter case, the vectors, distances and angles between the beads.

5 Encoding knots

Figure 5

The Dowker–Thistlethwaite notation is a knot invariant first proposed in 1983. This method assigns a sequence of integers to a knot by traversing it once and numbering each crossing on both visits, as shown in the image. The final sequence characterizes the knot.

These researchers achieved high accuracies, but only for the five simplest knots. We wanted to extend this to much more complicated topologies, while also simplifying the neural network architecture and using a smaller training dataset.

To do this we took inspiration from nature. In our bodies, knots in DNA are untangled by specialist enzymes called topoisomerases. These enzymes cut and reattach DNA strands and they can effectively smooth out knots despite being about a thousand times smaller than a DNA molecule.

We hypothesized that the topoisomerases can sense some local geometric property that allows them to locate the most tightly knotted part of the DNA molecule. We tried to do this ourselves using various quantities including the density and the curvature. In the end our results led back to the beginning – to Tait and his geometric writhe.

As well as calculating writhe over an entire knot, we can also measure it as a local quantity that tells us how much segment x is entangled with nearby segment y (figure 4c). We found that local writhe is a remarkably effective way to locate knotted segments in long, looping molecules (ACS Polymers Au 2 341). Based on this result, we decided that giving our AI the local writhe would give it the best chance to successfully identify complex knots.
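
One minimal way to realize this – our sketch of the idea, not the group’s production code – is to keep the Gauss integrand from the writhe calculation as a matrix over segment pairs instead of summing it:

import numpy as np

def local_writhe_matrix(points):
    """Entry (i, j) measures how entangled segment i of a closed 3D curve
    is with segment j; summing every entry recovers the global writhe."""
    n = len(points)
    seg = np.roll(points, -1, axis=0) - points
    mid = points + 0.5 * seg
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                r = mid[i] - mid[j]
                w[i, j] = np.dot(np.cross(seg[i], seg[j]), r) / np.linalg.norm(r)**3
    return w / (4 * np.pi)

# Rows of this matrix, rather than raw coordinates, become the input features.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta), np.sin(2 * theta)], axis=1)
print(local_writhe_matrix(curve).sum().round(3))   # equals the curve's writhe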

Armed with our theory, we began building a neural network to test it. To start, we generated a training dataset by simulating the thermal motion of the five simplest knots, extracting tens of thousands of conformations (figure 6a).

We then trained two neural networks: one using the Cartesian coordinates of the knots and one using the local writhe. In each case, we supervised the AI, and used a subset of our training dataset to tell the neural networks what type each of the knots was. To test our method we asked the neural networks to classify conformations of these simple knots that they hadn’t seen before.

When the AI was trained on the Cartesian coordinates using a simple neural network, it made a correct categorization only four times out of five, similar to what Vandans and Braghetto found. This is probably better than the score most of us would get in the Spot a Knot game, but it’s still far from perfect.

However, when the neural network was trained on the local writhe, the difference was staggering: it could correctly classify the knots with more than 99.9% accuracy.

Tougher challenges

Though I was surprised by this result, the identification of the five simplest knots is relatively trivial, and can be achieved using existing invariants (or an extremely eagle-eyed Spot a Knot player).

We decided to give the neural network a much trickier challenge. This time it would only have to classify three knots rather than five, but we had chosen them carefully: the Conway knot, the Kinoshita–Terasaka (KT) knot and the unknot – the simplest of all knots. The first two have 11 crossings, and are “mutants” of each other because they are identical except in one region where the knot is “flipped”. They share many knot invariants, and they also share some invariants with the unknot.

6 Spot the difference

Figure 6

A complete knot invariant shouldn’t change when a knot is smoothly deformed, but should return a different result for topologically distinct structures. Do the two pictures in a show the same knot? It’s often difficult for human intuition to tell knots apart. In fact, the two pictures show two slightly different structures – the Conway and Kinoshita–Terasaka knots. Because it’s difficult to tell them apart, these two knots can be used to test a knot-characterization neural network.

The images in b show different configurations of two knots – the 5₁, or cinquefoil, knot (above) and the 7₂ knot (below). In Davide Michieletto and colleagues’ work on neural networks, the cinquefoil was part of the first training dataset and the 7₂ was included in the larger dataset.

What we discovered is that the Conway and KT knots were indistinguishable for a neural network trained on Cartesian coordinates but they could be identified 99.9% of the time by the neural network trained on the local writhe.

The final test was to apply this training to a much larger pool of knots. We ran simulations of 250 types of knots, with up to 10 crossings (figure 6b). When the neural network was trained with the Cartesian coordinates it made a correct classification only one time out of five. By contrast, our best local-writhe-trained neural network could classify all 250 knots in a matter of seconds with 95% accuracy, much better than any other algorithm or single topological invariant (Soft Matter 20 71).

A final twist

Without knowing anything about knots or knot theory, our neural network had taught itself to do something that has long evaded human intuition. In fact, we are still working to open the “black box” and understand what exactly it discovered.

We have found that to distinguish the five simplest knots, the neural network takes every pair of points on the knot and multiplies the local writhe at the two points together. What’s intriguing is that this quantity is equivalent to an existing invariant called the “Vassiliev invariant of order two”.

Vassiliev invariants are computed by multiplying pairs, triplets, quadruplets, up to n-tuples of the local writhe matrix. Incidentally, the Vassiliev invariant of order two is also the coefficient of the quadratic term of the Conway polynomial, a close relative of the Alexander polynomial we met earlier. It’s been proposed, though never proved, that the complete set of Vassiliev invariants, which can be computed as an integral, is the long-sought complete invariant.

We were therefore excited to find that as it’s presented with more complex knots, the neural network adapts by computing Vassiliev invariants of higher order. For instance, to uniquely classify the first five knots, the neural network requires only the order-two Vassiliev invariant. But for the 250-knot dataset, it may compute the Vassiliev invariants up to order three or four.

Geometric and algebraic invariants are computed using very different mathematics, so it’s exciting that AI can discover connections between them, and this brings us a step closer to discovering a complete invariant.

Knotting else matters

In only three years, AlphaFold has generated millions of proteins, most of which have yet to be fully studied. In 2023 a group led by Joanna Sulkowska of the University of Warsaw predicted that up to 2% of human proteins generated by AlphaFold are knotted, with the most complex knot found having six crossings (Protein Sci. 32 e4631). The year before, Peter Virnau of the Johannes Gutenberg University Mainz discovered a protein knot with seven crossings in the AlphaFold2 dataset (Protein Sci. 31 e4380). This protein has never been observed experimentally, so it’s possible that even more complex knots are out there.

Knots don’t crop up only in biology; knotted topologies have also been found to influence the thermodynamic and material properties of ice and hydrogels, meaning that in the future we may use topology to design new materials. We need powerful methods to identify the structural fingerprints of knots in molecules and materials, and we hope that our findings will inform this search. Knotting really does matter.

In 2004 three researchers in Canada used their university’s computing cluster to extend the table of knots, first compiled by Tait, up to 19 crossings, identifying more than six billion unique structures (Journal of Knot Theory and Its Ramifications 13 57). Having taken 25 years to create his list, Tait would probably have been shocked to learn that a century later, a machine would be able to extend his work by more than five orders of magnitude, in just a few days.

The biggest outstanding challenge in knot theory remains the search for the elusive complete invariant. Now that we have AI on our side, the next step forward might take us equally by surprise.

CERN at 70: how the Higgs hunt elevated particle physics to Hollywood status https://physicsworld.com/a/cern-at-70-how-the-higgs-hunt-elevated-particle-physics-to-hollywood-status/ Tue, 13 Aug 2024 13:00:00 +0000 https://physicsworld.com/?p=115949 Peering behind the comms curtain at the world's most famous particle physics lab

When former physicist James Gillies sat down for dinner in 2009 with actors Tom Hanks and Ayelet Zurer, joined by legendary director Ron Howard, he could scarcely believe the turn of events. Gillies was the head of communications at CERN, and the Hollywood trio were in town for the launch of Angels & Demons – the blockbuster film partly set at CERN with antimatter central to its plot, based on the Dan Brown novel.

With CERN turning 70 this year, Gillies joins the Physics World Stories podcast to reflect on how his team handled unprecedented global interest in the Large Hadron Collider (LHC) and the hunt for the Higgs boson. Alongside the highs, the CERN comms team also had to deal with the lows. Not least, the electrical fault that put the LHC out of action for 18 months shortly after its switch-on. Or figuring out a way to engage with the conspiracy theory that particle collisions in the LHC would somehow destroy the Earth.

Spoiler alert: the planet survived. And the Higgs boson discovery was announced in that famous 2012 seminar, which saw tears drop from the eyes of Peter Higgs – the British theorist who had predicted the particle in 1964. Our other guest on the podcast, Achintya Rao, describes how excitement among CERN scientists became increasingly palpable in the days leading to the announcement. Rao was working in the comms team within CMS, one of the two LHC detectors searching independently for the Higgs.

Could particle physics ever capture the public imagination in the same way again?

Discover more by reading the feature “Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world” by James Gillies.

Photonic orbitals shape up https://physicsworld.com/a/photonic-orbitals-shape-up/ Tue, 13 Aug 2024 09:42:30 +0000 https://physicsworld.com/?p=115988 The behaviour of photons confined inside three-dimensional cavity superlattices is much more complex than that of electrons in conventional solid-state materials

Photons in arrays of nanometre-sized structures exhibit more complex behaviour than electrons in conventional solid-state materials. Though the two systems are sometimes treated as analogous, scientists at the University of Twente in the Netherlands discovered variations in the shape of the photons’ orbitals. These variations, they say, could be exploited when designing advanced optical devices for quantum circuits and nanosensors.

In solid-state materials, electrons are largely confined to regions of space around atomic nuclei known as orbitals. Additional electrons stack up in these orbitals in bands of increasing energy, and the scientists expected to find similar behaviour in photons.  “It has been known for some time that photonic materials are similar to standard electronic matter in many ways and can be described using energy bands and orbitals, too,” says Marek Kozon, a theorist and mathematician who participated in the study as part of his PhD in the Complex Photonic Systems (COPS) lab at Twente.

“Similar” does not mean “same”, however. “We have now discovered that orbitals in which photons are confined are significantly more varied in shape than electronic orbitals,” Kozon says. This is important, he says, because the shape of electronic orbitals influences materials’ chemical properties – something that is apparent in the Periodic Table of the Elements, which groups elements with similar orbital structures together. Additional variations in the shape of photonic orbitals could also create properties not achievable in electronic materials.

Boring electrons, exciting photons

The comparatively “boring” behaviour of electrons stems from the fact that they always orbit the nucleus in regions with sphere-like shapes, explains Kozon, who is now at the single-photon detector company Pixel Photonics in Germany. Photonic materials, in contrast, can be designed with much more freedom.

In the latest work, the Twente researchers used numerical computations to study how photons behave when they are confined in a three-dimensional nanostructure known as an inverse woodpile superlattice. This superlattice is a photonic crystal that contains periodic defects with a radius that differs from that of the pores in the underlying structure. The researchers adopted this design for two reasons, Kozon explains. The first is that photonic states inside the defects are insulated from their environment, making them easier to study. The second is that 3D inverse woodpile superlattices are relevant to experiments being carried out by colleagues in the COPS lab.

The team’s original motivation, Kozon continues, was to better understand how light is confined in these structures. “The study turned out to be significantly more complicated than we expected,” he says. “We produced several terabytes of data and developed new analysis methods, including scaling and machine learning, to evaluate the sheer amount of information we had gathered. We then investigated in more detail the superlattice parameters that the analysis flagged up as the most interesting.”

Applying the scaling techniques, for example, created an unexpected issue. While scaling theories usually work well for very large systems, which in this case would mean very large periodicities (or lattice constants), Kozon notes that “our system is precisely the opposite because it has a small periodicity. We were thus not able to calculate how light behaves in it.”

Optimally confining light

The team solved this problem by developing a unique clustering method that uses unsupervised machine learning to analyse the data. Thanks to these analyses, the researchers now know which types of structures can optimally confine light in an inverse woodpile superlattice. Conversely, they can identify any deviations from these ideal structures by comparing experimental observations with their – now vast – database.
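
The article does not spell out the team’s algorithm, so as a generic illustration of unsupervised clustering, here is a bare-bones k-means routine applied to made-up design parameters (the variable names and numbers are ours, not the group’s):

import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Cluster rows of x into k groups by alternating assignment and update."""
    rng = np.random.default_rng(seed)
    centres = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):              # avoid emptying a cluster
                centres[c] = x[labels == c].mean(axis=0)
    return labels, centres

rng = np.random.default_rng(1)
# Two imagined families of superlattice designs, each row a parameter vector
# (say pore radius, defect radius and confinement energy, suitably scaled).
designs = np.vstack([rng.normal(0, 0.3, (50, 3)), rng.normal(2, 0.3, (50, 3))])
labels, _ = kmeans(designs, k=2)
print(np.bincount(labels))   # -> [50 50]: the two families are recovered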

And that is not all: the team also analysed where energy is concentrated in the photonic crystal, making it possible to determine which parameters allow the greatest concentration of energy in a small volume of the structure. “This is extremely important for so-called cavity-quantum-electrodynamics (QED) applications in which we force light to interact with matter and, for example, to control the emission of light sources or even create exotic states of mixed light and matter,” Kozon tells Physics World. “This finding could help advance applications in efficient lighting, quantum computing or sensitive photonic sensors.”

The Twente researchers are now fabricating real 3D superlattices thanks to the knowledge they have gained. They report their present work in Physical Review B.

Liquid water could abound in Martian crust, seismic study suggests https://physicsworld.com/a/liquid-water-could-abound-in-martian-crust-seismic-study-suggests/ Mon, 12 Aug 2024 19:00:52 +0000 https://physicsworld.com/?p=115986 Reservoir could harbour microbial life

An ocean’s worth of liquid water could be trapped within the cracks of fractured igneous rocks deep within the Martian crust – according to a trio of researchers in the US. They have analysed seismic data gathered by NASA’s InSight Lander and their results could explain the fate of some of the liquid water that is believed to have existed on the Martian surface in the distant past.

Mars’ surface carries many traces of its watery past, including remnants of river channels, deltas and lake deposits. As a result, scientists are confident that lakes, rivers and oceans of liquid water were once common on the Red Planet.

Evidence also suggests that about 3–4 billion years ago, Mars’ atmosphere was gradually lost to space, and its surface dried up. While some water remains locked away in Martian ice caps, most of it would have either escaped into space with the rest of the atmosphere, or filtered down into porous rocks in the crust, where it could remain to this day. So far, scientists are uncertain as to how much of this water is held within the crust, and how deeply it could be sequestered.

Seismic insight

This latest research was done by Michael Manga at the University of California Berkeley along with Vashan Wright and Matthias Morzfeld at the University of California San Diego. The trio searched for buried water by analysing data collected by the InSight Lander, which probed the Martian interior in 2018–2022. To gather information about the planet’s crust, InSight’s SEIS instrument detected seismic waves reverberating throughout the planet, originating from sources including Marsquakes and meteor impacts.

As they travel through the Martian interior, these waves change speed and direction at boundaries between different materials in the crust. This means that when measured by SEIS, seismic waves originating from the same source can be detected at different times, depending on the paths they took to reach the probe.

“The speed at which seismic waves travel through rocks of different densities depends on their composition, pore space, and what fills the pore space – either gas, water, or ice,” Manga explains. By analysing the differing arrival times of seismic waves reaching the probe from the same sources, researchers can gather useful information about the composition of the planet’s interior.

To interpret InSight’s seismic data, Manga and colleagues combined its measurements with the latest rock physics models and probabilistic analysis. They were able to identify the combinations of rock composition, water saturation, porosity, and pore shape within the Martian crust that could best explain InSight’s measurements.

Large reservoir

“We identified a large reservoir of liquid water,” says Manga. “The observations on Mars are best explained by having cracks in the mid-crust that are filled with liquid water.”

The researchers reckon that this reservoir is sequestered about 11.5–20 km beneath the surface and contains enough water to cover the Martian surface in a liquid ocean between 1 and 2 km deep. This section of the crust is believed to comprise fractured igneous rock, formed through the cooling and solidification of magma.
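
A quick back-of-the-envelope check of that “global ocean” figure requires only Mars’ mean radius of about 3390 km (a textbook value, not a number from the paper):

import math

r_mars_km = 3389.5
surface_area_km2 = 4 * math.pi * r_mars_km**2    # ~1.44e8 km^2

for depth_km in (1, 2):
    volume_km3 = surface_area_km2 * depth_km
    print(f"{depth_km} km ocean ≈ {volume_km3:.2e} km^3")
# For comparison, Earth's oceans hold roughly 1.4e9 km^3 of water.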

The team hopes that their results could provide fresh insights into the fate of the liquid water that once dominated Mars’ surface. “Understanding the water cycle and how much water is present is critical for understanding the evolution of Mars’ climate, surface, and interior,” Manga says.

The team’s discoveries could help identify potentially habitable environments hidden deep within the Martian crust, where microbial communities could thrive today or could have thrived in the past.

“On Earth, we see life deep underground,” Manga explains. “This does not necessarily mean there is also life on Mars, but at least there are environments that could possibly be habitable.”

The research is described in PNAS.

Had a leak from your science facility? Here’s how to deal with the problem https://physicsworld.com/a/had-a-leak-from-your-science-facility-heres-how-to-deal-with-the-problem/ Mon, 12 Aug 2024 10:32:45 +0000 https://physicsworld.com/?p=115794 Robert P Crease explains how Fermilab navigated an accidental leak of tritium

Small leaks of radioactive material can be the death knell for large scientific facilities. It’s happened twice already. Following releases of non-hazardous amounts of tritium, the Brookhaven National Laboratory (BNL) was forced to shut its High Flux Beam Reactor (HFBR) in 1997, while the Lawrence Berkeley National Laboratory (LBNL) had to close its National Tritium Labeling Facility in 2001.

Fortunately, things don’t always turn out badly. Consider the Fermi National Accelerator Laboratory (Fermilab) near Chicago, which has for many decades been America’s premier high-energy physics research facility. In 2005, an experiment there also leaked tritium, but the way the lab handled the situation meant that nothing had to close. Thanks to a grant from the National Science Foundation, I’ve been trying to find out why such successes happen.

Running on grace

Fermilab, which opened in 1971, has had a hugely successful history. But its relationship with the local community got off to a shaky start. In 1967, to acquire land for the lab, the State of Illinois used a US legal manoeuvre called “eminent domain” to displace homeowners, angering neighbours. More trouble came in 1988, when the US Department of Energy (DOE) considered Fermilab as a possible site for the 87 km circumference Superconducting Supercollider (SSC), which would require acquiring more land.

Some locals formed a protest group called CATCH (Citizens Against The Collider Here). It was an aggressive organization whose members accused Illinois officials of being “secretive, arrogant, and insensitive”, and of wanting to saddle the area with radiation, traffic and lower property values. While Illinois officials were making the bid to host the SSC, the lab was the focus of protests. The controversy ended when the DOE chose to site the machine in Waxahachie, Texas. (The SSC was cancelled in 1993, incomplete.)

Aware of the local anger, Fermilab decided to revamp its public relations. In 1989, it replaced its Office of Public Information with a “Department of Public Affairs” reporting to the lab director. Judy Jackson, who became the department’s head, sought professional consultants, and organized a diverse group of  community members with different backgrounds, including a CATCH founder, to examine Fermilab’s community engagement practices.

Brookhaven’s closure of the HFBR in 1997 was a wake-up call for US labs, including Fermilab itself. Aware that the reactor had been shut by a cocktail of politics, activism and media scare stories, the DOE organized a “Lessons learned” conference in Gaithersburg, Maryland, a year later. When Jackson came to the podium her first slide read simply: “Brookhaven’s experience: There but for the grace of God…”

Then, in 2005, Fermilab discovered that one of its own experiments leaked tritium.

Tritium tale

All accelerators produce tritium in particle collisions at target areas or beam dumps. Much dissipates in air, though some replaces ordinary hydrogen atoms to make tritiated water, which is hard to control. Geographically, Fermilab is fortunate, being located over almost impermeable clay. Compacted and thick, the clay’s a nuisance for gardeners and construction crews but a godsend to Fermilab, for bathtub-like structures built in it easily contain the tritium.

The target area of one experimental site – Neutrinos at the Main Injector (NuMI) – was dug in bedrock beneath the clay. Then, during routine environmental monitoring in November 2005, Fermilab staff found a (barely) measurable amount of tritium in a creek that flowed offsite. Tritium from NuMI was mixing with unexpectedly high amounts of water vapour seeping through the bedrock, creating tritiated water that went into a sump. This was being pumped out and making its way into surface water.

Jackson’s department drew up a plan that would see letters delivered by hand to community members from lab director Pier Oddone, who would also pen an article in the Friday 9 December edition of the daily online newspaper Fermilab Today. The idea was that employees, neighbours, the media, local officials and groups would all be informed simultaneously, so that everybody would first hear the news from Fermilab rather than other sources.

Disaster struck when a sudden snowstorm threatened to delay the letters from reaching recipients. But the lab sent staff out anyway, knowing that local residents simply had to hear of the plan before that issue of Fermilab Today. When published, it appeared as normal, with a story about a “Toys for Tots” Christmas collection, a list of lab events and the cafeteria menu (including roasted-veggie panini).

Oddone’s “Director’s corner” column was in its usual spot on the right, but attentive readers would have noticed that it had appeared a few days early (it normally came out on a Tuesday). As well as mentioning the letter that had been hand-delivered to the community, Oddone said that there had been “a small tritium release” as a result of “normal accelerator operations”, but that it was “well within federal drinking water standards”.

His column provided a link to a webpage for more information, which listed Jackson’s office phone number and said it would link to any subsequent media coverage of the episode. Oddone’s message seemed to be appropriate publicity about a finding that was not a health or environmental hazard; it was a communication essentially saying: “Here’s something that’s happening at Fermilab.”

Fermilab family fair

For years Jackson marvelled at how smoothly everything turned out. Politicians were supportive, the media fair and community members were largely appreciative of the extent to which Fermilab had gone to keep them informed. “Don’t try this at home,” she’d tell people, meaning don’t try to muddle through without having a plan drawn up with the help of a consultant. “If you do it wrong, it’s worse than not doing it at all.”

The critical point

Fermilab’s successful navigation of the unexpected tritium emission cannot be traced to any one factor. But two lessons stand out from the 10 or so other episodes I’ve found around that time when major research instruments leaked tritium. One is the importance of having a strong community group that wasn’t just a token effort but a serious exercise that involved local activists. The group discouraged activist sharpshooting and political posturing, thereby allowing genuine dialogue about issues of concern.

A second lesson is what I call “quantum of response”, by which I mean that the size of one’s response must be appropriate to the threat rather than over- or underplaying it. Back in the late 1990s, the DOE had responded to the Brookhaven leak with dramatic measures – press conferences were held, statements issued and, incredibly, the lab’s contractor was fired. Instead of reassuring community members, those actions terrified many.

It’s insane to fire a contractor that had been successful for half a century because of something that posed no threat to health or the environment. All it did was suggest that something far worse was happening that the DOE wasn’t talking about. One Brookhaven activist called the leak a “canary” presaging the lab’s admission of more environmental catastrophes.

The Fermilab lesson is two decades old now. The rise of social media since then makes it easy to rally and consolidate frightened people by promoting and amplifying inflammatory messages, which will make future episodes harder to address. Moreover, tritium leaks are only one kind of episode that can spark community concerns at research laboratories.

Sometimes accelerator beams have gone awry, or experimental stations have malfunctioned in a way that releases radiation. Activists have accused accelerators at Brookhaven and CERN of possibly creating strangelets or black holes that might destroy the world. Fermilab’s current woes stemming from its recent Performance Evaluation and Measurement Plan may raise yet another set of community relations issues.

Whatever the calamity, a lab’s response should not be improvised but based on a carefully worked-out plan. In the 21st century, “God’s grace” may be a weak force. Studying previous episodes, and seeking lessons to be learned from them, is a stronger one.

Our world (still) cannot be anything but quantum, say physicists https://physicsworld.com/a/our-world-still-cannot-be-anything-but-quantum-say-physicists/ Mon, 12 Aug 2024 08:26:12 +0000 https://physicsworld.com/?p=115981 Measurements of the Leggett-Garg inequality using neutron interferometry emphasize that no classical macroscopic theory can describe reality

Is the behaviour of quantum objects described by a simple, classical theory? Or can particles really be in a superposition of different places at once, as quantum theory suggests? In 1985, the physicists Anthony James Leggett and Anupam Garg proposed a new way of answering these questions. If the world can be described by a theory that doesn’t feature superposition and other quantum phenomena, Leggett and Garg showed that a certain inequality must be obeyed. If the world really is quantum, though, the inequality will be violated.

Researchers at TU Wien in Austria have now made a new measurement of this so-called Leggett-Garg inequality (LGI) using neutron interferometry. Their verdict is clear: no classical macroscopic theory can truly describe reality. The work also provides further proof that a particle can be in a superposition of two states associated with different locations – even when these locations are centimetres apart.

Correlation strengths

The LGI is conceptually similar to the better-known Bell’s inequality, which describes how the behaviour of one object relates to that of another object with which it is entangled. The LGI, however, describes how the state of a single object varies at different points in time.

Leggett and Garg assumed that the object in question can be measured at different moments. Each of these measurements must yield one of two possible results. It is then possible to perform a statistical analysis of how strongly the results at the different moments correlate with each other, even without knowing how the object’s actual state changes over time.

If the theory of classical realism holds, Leggett and Garg showed that the degree of these correlations cannot exceed a certain level. Specifically, for a set of three measurements, the quantity K = C21 + C32 − C31 (where Cij is the correlation function between measurements i and j) cannot exceed 1. If, on the other hand, the object obeys the rules of quantum theory, K can be greater than 1.
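
To get a feel for the numbers, here is a minimal sketch (ours, not the TU Wien analysis) of how K behaves for an idealized two-level quantum system. For a system precessing at angular frequency ω, quantum mechanics predicts Cij = cos ω(tj − ti), and with equal spacing τ between measurements the correlator peaks at 1.5 – comfortably above the macrorealist bound:

    import numpy as np

    # Idealized two-level system precessing at angular frequency omega.
    # Quantum mechanics predicts C_ij = cos(omega*(t_j - t_i)) for
    # dichotomic measurements along a fixed axis at times t_i and t_j.
    omega = 1.0
    tau = np.linspace(0.01, np.pi, 500)  # equal spacing between measurements

    C21 = np.cos(omega * tau)       # correlation of measurements 1 and 2
    C32 = np.cos(omega * tau)       # correlation of measurements 2 and 3
    C31 = np.cos(omega * 2 * tau)   # correlation of measurements 1 and 3

    K = C21 + C32 - C31             # Leggett-Garg correlator

    print(f"maximum K = {K.max():.3f} at omega*tau = {tau[K.argmax()]:.3f}")
    # Prints a maximum of 1.5 at omega*tau = pi/3; any K above 1 is
    # incompatible with macrorealism.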

Enter neutron beams

Previous experiments have already demonstrated LGI violations in several quantum systems, including photonic qubits, nuclear spins in diamond defect centres, superconducting qubits and impurities in silicon. Still, team member Hartmut Lemmel says the new measurement offers certain advantages.

“Neutron beams, as we use them in a neutron interferometer, are perfect,” says Lemmel, who oversees the S18 instrument at the Institut Laue-Langevin (ILL) in Grenoble, France, where the experiment was carried out. A neutron interferometer, he explains, is a silicon-based crystal interferometer in which an incident neutron beam is split into two partial beams at a crystal plate and then recombined by another piece of silicon. This configuration means there are three distinct regions in which the neutrons’ locations can be measured: in front, inside and behind the interferometer.

“The actual measurement of the two-level system’s state probes the presence of the neutron in two particular regions of the interferometer, which is usually referred to as a ‘which-way’ measurement,” explains team member Stephan Sponar, a postdoctoral researcher at TU Wien. “So as not to disturb the time evolution of the system, our measurement probes the absence rather than the presence of the neutron in the interferometer. This is called an ideal negative measurement.”

The fact that the two partial beams are several centimetres apart is also beneficial, adds Niels Geerits, a PhD student in the team. “In a sense, we are dealing with a quantum object that is huge by quantum standards,” he says.

Leggett-Garg inequality is violated

After combining several neutron measurements, the TU Wien team showed that the LGI is indeed violated, obtaining a final value for the Leggett–Garg correlator of K = 1.120 ± 0.026.

“Our obtained result cannot be explained within the framework of macro-realistic theories, only by quantum theory,” Sponar tells Physics World. One consequence, Sponar continues, is that the idea that “maybe the neutron is only travelling on one of the two paths, we just don’t know which one” cannot be true. There is, he says, “no time inside the interferometer [when] the system (neutron) is in a ‘given state’, that is, either in path 1 or in path 2”.

Instead, he concludes, the neutron must be in a coherent superposition of system states – a fundamental property of quantum mechanics.

The experiment is detailed in Physical Review Letters.

The post Our world (still) cannot be anything but quantum, say physicists appeared first on Physics World.

]]>
Research update Measurements of the Leggett-Garg inequality using neutron interferometry emphasize that no classical macroscopic theory can describe reality https://physicsworld.com/wp-content/uploads/2024/08/neutrons-on-classicall.jpg newsletter1
CERN’s Science Gateway picked by Time magazine as one of the ‘world’s greatest places’ to visit https://physicsworld.com/a/cerns-science-gateway-picked-by-time-magazine-as-one-of-the-worlds-greatest-places-to-visit/ Sat, 10 Aug 2024 09:00:42 +0000 https://physicsworld.com/?p=115969 The gateway ‘bridges the gap between the general public and the people in lab coats’

The post CERN’s Science Gateway picked by <em>Time</em> magazine as one of the ‘world’s greatest places’ to visit appeared first on Physics World.

]]>
As well as a high-end hotel on the Amalfi coast, a wildlife lodge in Guyana, and a “bamboo sanctuary” in Indonesia, Time magazine’s slightly pretentious list of the “world’s greatest places” for 2024 includes one destination physicists might actually want to visit.

We’re talking about CERN’s “Science Gateway”, an outreach centre designed by the “master of hi-tech architecture” Renzo Piano, which features a transparent skywalk between two raised tubular buildings.

Time calls the gateway a “family-friendly, admission-free offshoot” of CERN that “bridges the gap between the general public and the people in lab coats”.

The Science Gateway took some three years to build and opened in October 2023. It includes exhibitions, labs and a 900-seat auditorium, as well as a shop and a Big Bang café. Aimed at those aged five and above, the centre is expected to welcome half a million visitors each year.

To compile the list, Time selected from nominations made via an application process as well as suggestions from its international network of correspondents and contributors.

Other destinations on the list include Antarctica’s White Desert, Maui Cultural Lands in Hawaii, and Kamba in the Republic of the Congo.

So, what are you waiting for? Book that trip to Geneva.

The post CERN’s Science Gateway picked by <em>Time</em> magazine as one of the ‘world’s greatest places’ to visit appeared first on Physics World.

]]>
Blog The gateway ‘bridges the gap between the general public and the people in lab coats’ https://physicsworld.com/wp-content/uploads/2024/08/030A9659.jpeg
Pumping on a half-pipe: physicists model a skateboarding skill https://physicsworld.com/a/pumping-on-a-half-pipe-physicists-model-a-skateboarding-skill/ Fri, 09 Aug 2024 17:02:49 +0000 https://physicsworld.com/?p=115977 Variable pendulum describes how energy is pumped into the system

The post Pumping on a half-pipe: physicists model a skateboarding skill appeared first on Physics World.

]]>
If you have been watching skateboarding at the Olympics, you may be wondering how the skaters manage to keep going up and down ramps long after friction should have consumed their initial gravitational potential energy.

That process is called pumping, and most skaters will learn how to do it by going back and forth on a half-pipe. If you are not familiar with the lingo, a half-pipe comprises two ramps that are connected by a lower (sometimes flat) middle section. A good skateboarder can skate up the side of a ramp, turn around and do the same on the other side – and continue to oscillate back and forth in the half-pipe.

The obvious physics of this scenario is that the skater’s gravitational potential energy at the top of the half-pipe will quickly be lost to friction. So how does a skater keep going? How do they pump kinetic energy into the system?

Variable pendulum

It turns out that the process is similar to an obscure way of keeping a playground swing going – standing on the seat and shifting your centre of mass by squatting down at the middle of the arc and rising up at either end (see video below). This can be understood in terms of a pendulum with a length that varies in a regular way – and that is how Florian Kogelbauer at ETH Zurich and colleagues in Japan have modelled pumping in a skateboard half-pipe.

Their model considers how a skilled skater modulates their centre of mass relative to the surface of the half-pipe. Essentially this involves crouching down as the skateboard travels across the flat bit of the half-pipe, then pushing up from the board during the curved ascent of the ramp. Pushing up reduces the moment of inertia of the system, and conservation of angular momentum dictates that the skater must speed up.
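
A minimal event-based simulation of the swing analogy (our toy model and numbers, not the team’s half-pipe equations) shows how the amplitude grows when the effective pendulum length is shortened at the bottom of each arc, where angular momentum is approximately conserved:

    import numpy as np

    g = 9.81
    L_long, L_short = 1.00, 0.90  # centre-of-mass distance: crouched vs standing
    theta = np.radians(10.0)      # starting amplitude
    print(f"start: {np.degrees(theta):5.2f} deg")

    for half_cycle in range(1, 6):
        # Swing down with the long (crouched) pendulum; energy conservation
        # gives the angular speed at the bottom of the arc.
        omega = np.sqrt(2 * g * (1 - np.cos(theta)) / L_long)

        # "Push up" at the bottom: the radius drops abruptly from L_long to
        # L_short. Gravity exerts almost no torque there, so the angular
        # momentum m*L^2*omega about the pivot is conserved.
        omega *= (L_long / L_short) ** 2

        # Swing up with the short (standing) pendulum to a new turning point.
        cos_max = 1 - (L_short * omega**2) / (2 * g)
        theta = np.arccos(np.clip(cos_max, -1.0, 1.0))

        # Squat back down at the turning point (no kinetic energy to change),
        # ready for the next descent.
        print(f"after half-cycle {half_cycle}: {np.degrees(theta):5.2f} deg")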

The team compared their model to video data of experienced and inexperienced skaters pumping a half-pipe. They found that experienced skaters did indeed adhere to their model of pumping. They now plan to extend their model to include other movements done by skaters during pumping. They also say that their model could be used to better understand the physics of other sports such as ski jumping.

The research is described in Physical Review Research.

And if you are interested in the physics of the playground swing, check out the video below.

 

The post Pumping on a half-pipe: physicists model a skateboarding skill appeared first on Physics World.

]]>
Blog Variable pendulum describes how energy is pumped into the system https://physicsworld.com/wp-content/uploads/2024/08/9-8-24-skateboard-pumping.jpg
Peering inside the biological nano-universe: Barbora Špačková on unveiling individual molecules moving in real time https://physicsworld.com/a/peering-inside-the-biological-nano-universe-barbora-spackova-on-unveiling-individual-molecules-moving-in-real-time/ Fri, 09 Aug 2024 13:30:55 +0000 https://physicsworld.com/?p=115768 Barbora Špačková on moving from theoretical to experimental physics and the joy of refining her technology for real-world applications

The post Peering inside the biological nano-universe: Barbora Špačková on unveiling individual molecules moving in real time appeared first on Physics World.

]]>
On 10 April 2019, physicist Barbora Špačková was peering through an optical microscope in her lab at Chalmers University of Technology in Sweden when she saw a short DNA segment – a biological object so small that conventional microscopy had been thought all but incapable of revealing it. She remembers the exact date because, in a poetic coincidence, it was the day that the first image of a black hole was released.

While everyone on campus was talking about the iconic astronomical image, Špačková was in the lab witnessing her first successful experiment on the path to a new microscopy technique that today enables other scientists to see molecules just a nanometre in size. “That was a perfect day,” she recalls. “Science is often about 95% failure, troubleshooting and wondering why experiments don’t go as planned. So having that moment of success was beautiful.”

But her path to that moment had not been linear. As an undergraduate studying physics at the Czech Technical University in Prague, Špačková took a two-year break from her studies, during which she worked as a freelance graphic designer. “It was a period when I was not exactly sure what to do with my life. But one night, I woke up with a clear realization that my heart was in science,” she says. “Coming back to the university, I felt more determined than ever.”

After defending her Master’s thesis in physical engineering, which focused on theory and simulations, Špačková felt drawn towards experimental work – particularly technologies with applications in the life sciences. So she started a PhD studying plasmonic biosensors, which use metal nanostructures to detect biological molecules. These sensors exploit surface plasmon resonance, in which electrons on a metal surface oscillate in response to light. When a biological molecule binds to the sensor’s surface, it causes a measurable shift in the resonant frequency, thus signalling the molecule’s presence.

Špačková’s PhD thesis on detecting extremely low concentrations of molecules won her the 2015 Werner von Siemens Award for Excellence. One of the concepts she’d worked on led to her developing a working prototype of a sensor able to detect cancer biomarkers. “I was really happy that I went with my dream,” says Špačková, adding that she “moved from the realm of theory to hands-on experimentation and eventually built a box with a functional button designed to serve a greater good.”

A serendipitous discovery

After her PhD, Špačková wanted to broaden her perspective by working abroad. Fortunately, she found a postdoc matching her interests in the group led by Christoph Langhammer at Chalmers so she and her young family moved there.

The project focused on nano-plasmonic biosensing again, but in a novel configuration, by combining it with nanofluidics. This involves studying fluids confined in nanoscale structures – a technology Špačková had not worked with before. “I had this playground of new toys that I had never seen in my life,” she says. “It was exciting. I learned a lot. I was experiencing a new part of the universe.”

Early on in her project, a colleague showed her a strange optical effect he was seeing in his devices. Špačková decided to investigate, developing a theory of how biomolecules inside nanofluids interact with light. To her surprise, her calculations suggested it should be possible to see even an individual biomolecule.

Into the nano-universe

These nanometre-sized objects had never been seen using traditional optical microscopy, but repeated calculations convinced Špačková she was onto something. With support from her supervisor and help from other team members, she equipped the lab with instruments needed to pursue the theory.

The trick was to put a biomolecule inside a nanochannel on a chip. Although biomolecules scatter too little light to be seen directly, interference with the light scattered by the nanochannel creates a much higher contrast. Subtracting an image of the empty channel from one with the biomolecule inside isolates this interference signal, revealing the presence of the molecule.
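
A toy numerical illustration of why the subtraction helps (all field and noise values here are invented): the molecule’s own scattered intensity is negligible, but its interference cross-term with the field scattered by the channel is orders of magnitude larger.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy scattered fields in arbitrary units (hypothetical values).
    E_channel = 1.0    # reference field scattered by the empty nanochannel
    E_molecule = 1e-3  # field scattered by the biomolecule alone

    noise = 2e-4       # per-frame detection noise (toy value)
    n_frames = 10_000

    # Intensities recorded without and with the molecule in the channel.
    I_empty = np.abs(E_channel)**2 + noise * rng.standard_normal(n_frames)
    I_full = np.abs(E_channel + E_molecule)**2 + noise * rng.standard_normal(n_frames)

    # Direct detection would have to find |E_molecule|^2 ~ 1e-6; subtraction
    # instead isolates the cross term 2*E_channel*E_molecule ~ 2e-3, three
    # orders of magnitude larger.
    contrast = (I_full - I_empty).mean()
    print(f"measured contrast: {contrast:.2e} (cross term: {2*E_channel*E_molecule:.1e})")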

While other optical microscopy methods have enabled scientists to see single molecules before, they usually involve labelling these objects with fluorescent markers or fixing them to a surface – both of which can affect their properties and behaviours. The unique advantage of Špačková’s technique – named “nanofluidic scattering microscopy” (NSM) – is that it unveils single molecules in their natural state, moving freely in real time.

Looking at life

When Špačková succeeded in getting the microscope to work in 2019, it was not just a remarkable technological achievement; it was also an exciting step towards potential applications. Her invention could deepen biologists’ understanding of living processes, by showing how biomolecules move around and interact. It could also accelerate drug development, by illuminating how drug candidates interact with cell components.

Recognizing these possibilities, Špačková and her colleagues founded a start-up called Envue Technologies, with support from the Chalmers Ventures incubator, to develop and commercialize NSM instruments. Although Špačková returned to Czechia in 2022, after receiving a grant from the Czech Science Foundation and Marie-Curie Fellowship, she remains a scientific adviser to the company. “This is a super exciting experience for academics,” she says. “You get in touch with the real world and potential end users of your technology.”

Earlier this year, it was announced that Špačková will be setting up one of three new Dioscuri Centres of Scientific Excellence in the Czech Republic – an initiative of the Max Planck Society to support outstanding scientists establishing research groups.

In this programme, Špačková will be working in partnership with a German research group studying molecular transport in cells. The ultimate goal is to develop imaging tools based on NSM, for this application. “We would really like to dive inside this biological nano-universe and observe it in ways that were not possible before,” says Špačková.

It is also the first time that Špačková will be a team leader, and she is looking forward to embracing the challenge of this new role. Reflecting on her career so far, she emphasizes that everyone has a unique path and must tune in to what is right for them. “I think there’s a subtle art of deciding whether to give up or keep going,” she says. “You have to be very careful with the decision. In my case, following the heart worked.”

The post Peering inside the biological nano-universe: Barbora Špačková on unveiling individual molecules moving in real time appeared first on Physics World.

]]>
Feature Barbora Špačková on moving from theoretical to experimental physics and the joy of refining her technology for real-world applications https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Careers-Spackova-portrait.png newsletter
Kirigami cubes make a novel mechanical computer https://physicsworld.com/a/kirigami-cubes-make-a-novel-mechanical-computer/ Fri, 09 Aug 2024 08:39:42 +0000 https://physicsworld.com/?p=115963 New device can store, retrieve and erase data

The post Kirigami cubes make a novel mechanical computer appeared first on Physics World.

]]>
A new mechanical computer made from an array of rigid, interconnected plastic cubes can store, retrieve and erase data simply by stretching the array and manipulating the position of the cubes. The device’s construction is inspired by the ancient Japanese art of paper cutting, or kirigami, and its designers at North Carolina State University in the US say that more advanced versions could be used in stable, high-density memory and logic computing; in information encryption and decryption; and to create displays based on three-dimensional units called voxels.

Mechanical computers were first developed in the 19th century and do not contain any electronic components. Instead, they perform calculations with levers and gears. We don’t often hear about such contraptions these days, but researchers led by NC State mechanical and aerospace engineer Jie Yin are attempting to bring them back due to their stability and their capacity for storing complex information.

A periodic array of computing cubes

The NC State team’s computer comprises a periodic array, or metastructure, of 64 interconnected polymer cubes, each measuring 1 cm on a side and grouped into blocks. The cubes are connected by thin hinges of elastic tape and can be moved either by hand or by means of a magnetic plate attached to their top surfaces. When the array is stretched in one direction, it enters a multi-stable state in which cubes can be pushed up or down, representing a binary 1 and 0 respectively. Thus, while the unit cells are interconnected, each cell acts as an independent switch with two possible states – in other words, as a “bit” familiar from electronic computing.

If the array is then compressed, the cubes lock in place, fixing them in the 0 or 1 state and allowing information to be stored in a stable way. To change these stored bits, the three-dimensional structure can be re-stretched, returning it to the multi-stable state in which each unit cell becomes editable.
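
In software terms the cycle behaves like a latch: stretch to unlock, write bits, compress to lock. A toy model of the operating cycle (class and method names are ours, purely illustrative):

    # Toy software model of the mechanical memory's operating cycle
    # (the structure is ours, not the NC State design).
    class KirigamiMemory:
        def __init__(self, n_cubes=64):
            self.bits = [0] * n_cubes  # 0 = cube down, 1 = cube up
            self.stretched = False     # multi-stable (editable) state?

        def stretch(self):
            # Stretching makes every cube an independent two-state switch.
            self.stretched = True

        def write(self, index, value):
            if not self.stretched:
                raise RuntimeError("compressed array: bits are locked")
            self.bits[index] = value

        def compress(self):
            # Compressing locks the cubes in place, storing the pattern
            # stably with no power needed to hold it.
            self.stretched = False

    mem = KirigamiMemory()
    mem.stretch()
    for i, bit in enumerate([1, 0, 1, 1]):  # write a 4-bit pattern
        mem.write(i, bit)
    mem.compress()
    print(mem.bits[:4])  # -> [1, 0, 1, 1], retained until re-stretched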

Ancient inspiration

The new device was inspired by Yin and colleagues’ previous work, which saw them apply kirigami principles to shape-morphing matter. “We cut a thick plate of plastic into connected cubes,” Yin explains. “The cubes can be connected in multiple ways in a closed loop so that they can transform from 2D plates to versatile 3D voxelated structures.”

These transformations, he continues, were based on rigid rotations – that is, ones in which neither the cubes nor the hinges deform. “We were originally thinking of storing elastic energy in the hinges so that they could lead to different shape changes,” he says. “With this came the bistable unit cell idea.”

Yin says that one of the main challenges involved in turning the earlier shape-morphing system into a mechanical computer was to work out how to construct and connect the unit cells. “In our previous work, we made use of an ad-hoc design, but we could not directly extend this to this new work,” he tells Physics World. “We finally came up with the solution of using four cubes as a base unit and [assembling] them in a hierarchical way.”

While the platform has several possible applications, Yin says one of the most interesting would be in three-dimensional displays. “Each pop-up cube acts as a voxel with a certain volume and can independently be pushed up and remain stable,” he says. “These properties are useful for interactive displays or haptic devices for virtual reality.”

Computing beyond binary code

The current version of the device is still far from being a working mechanical computer, with many improvements needed to perform even simple mathematical operations. However, team member Yanbin Li, a postdoctoral researcher at NC State and first author of a Science Advances paper on the work, points out that the density of information it can store is relatively high. “Using a binary framework – where cubes are either up or down – a simple metastructure of nine functional units has more than 362 000 possible configurations,” he explains.

A further advantage is that a functional unit of 64 cubes can take on a wide variety of architectures, with up to five cubes stacked on top of each other. These novel configurations would allow for the development of computing that goes well beyond binary code, Li says.

In the nearer term, Li suggests that the cube array could allow users to create three-dimensional versions of mechanical encryption or decryption. “For example, a specific configuration of functional units could serve as a 3D password,” he says.

The post Kirigami cubes make a novel mechanical computer appeared first on Physics World.

]]>
Research update New device can store, retrieve and erase data https://physicsworld.com/wp-content/uploads/2024/08/09-08-2024-Cube-computer-scaled.jpg
Abdus Salam: celebrating a unifying force in global physics https://physicsworld.com/a/abdus-salam-celebrating-a-unifying-force-in-global-physics/ Thu, 08 Aug 2024 13:41:55 +0000 https://physicsworld.com/?p=115957 Our podcast guests are Claudia de Rham and Ian Walmsley at Imperial College

The post Abdus Salam: celebrating a unifying force in global physics appeared first on Physics World.

]]>
This podcast explores the extraordinary life of the Pakistani physicist Abdus Salam, who is celebrated for his ground-breaking theoretical work and for his championing of physics and physicists in developing countries.

In 1964, he founded the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy – which supports research excellence worldwide with a focus on physicists in the developing world. In 1979 Salam shared the Nobel Prize for Physics for his work on the unification of the weak and electromagnetic interactions.

Salam spent most of his career at Imperial College London and the university is gearing up to celebrate the centenary of his birth in January 2026. In this episode of the Physics World Weekly podcast, Imperial physicists Claudia de Rham and Ian Walmsley look back on the extraordinary life of Salam – who died in 1996. They also talk about the celebrations at Imperial College.

Image courtesy: AIP Emilio Segrè Visual Archives, Physics Today Collection

The post Abdus Salam: celebrating a unifying force in global physics appeared first on Physics World.

]]>
Podcasts Our podcast guests are Claudia de Rham and Ian Walmsley at Imperial College https://physicsworld.com/wp-content/uploads/2024/08/8-8-24-Abdus-Salam-list.jpg newsletter
Physicists detect nuclear decay in the recoil of a levitating sphere https://physicsworld.com/a/physicists-detect-nuclear-decay-in-the-recoil-of-a-levitating-sphere/ Wed, 07 Aug 2024 15:09:54 +0000 https://physicsworld.com/?p=115940 Principle of momentum conservation makes it possible to "see" individual alpha particles leaving a micron-scale silica bead

The post Physicists detect nuclear decay in the recoil of a levitating sphere appeared first on Physics World.

]]>
Physicists in the US have detected the nuclear decay of individual helium nuclei by embedding radioactive atoms in a micron-sized object and measuring the object’s recoil as a particle escapes from it. The technique, which is an entirely new way of studying particles emitted via nuclear decay, relies on the principle of momentum conservation. It might also be used to detect other neutral decay products, such as neutrinos and particles that could be related to dark matter and might escape detection by other means.

The conservation of momentum is a fundamental concept in physics, alongside the conservation of energy and of mass. The principle is that because momentum (mass multiplied by velocity) can be neither created nor destroyed, the total momentum of an isolated system must remain constant – as described by Newton’s laws of motion.

Outgoing decay product will exert a backreaction

Physicists led by David Moore of Yale University have now used this principle to determine when a radioactive atom emits a single helium nucleus (or alpha particle) as it decays. The idea is as follows: if the radioactive atom is embedded in a larger object, the outgoing decay product will exert a backreaction on the object, making it recoil in the opposite direction. “This is similar to throwing a ball while on a skateboard,” explains team member Jiaxiang Wang. “After the ball has been thrown, the skateboard will slowly roll backward.”

The backreaction on a large object from just a single nucleus inside it would normally be too tiny to detect, but the Yale researchers managed to do it by precisely measuring the object’s motion using the light scattered from it. Such a technique can gauge forces as small as 10⁻²⁰ N and accelerations as tiny as 10⁻⁷ g, where g is the local acceleration due to the Earth’s gravitational pull.

“Our technique allows us to determine when a radioactive decay has occurred and how large the momentum the decaying particle exerted on the object,” Wang says. “Momentum conservation ensures that the momentum carried by the object and the emitted alpha particle are the same. This means that measuring the object’s recoil provides us with information on the decay products.”
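
Back-of-envelope numbers (ours, assuming a representative ~6 MeV alpha particle from the lead-212 decay chain and a 1-µm-diameter silica sphere) show the scale of the recoil involved:

    import numpy as np

    MeV = 1.602e-13     # joules
    m_alpha = 6.64e-27  # alpha-particle mass, kg

    # Momentum of a ~6 MeV alpha (non-relativistic: p = sqrt(2*m*E)).
    E_alpha = 6.0 * MeV
    p_alpha = np.sqrt(2 * m_alpha * E_alpha)

    # Mass of a 1-micron-diameter silica sphere (density ~2200 kg/m^3).
    radius = 0.5e-6
    m_sphere = 2200 * (4 / 3) * np.pi * radius**3

    # Momentum conservation: the sphere recoils with equal and opposite momentum.
    v_recoil = p_alpha / m_sphere
    print(f"alpha momentum: {p_alpha:.2e} kg m/s")
    print(f"sphere mass   : {m_sphere:.2e} kg")
    print(f"recoil speed  : {v_recoil:.2e} m/s (about 0.1 mm/s)")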

Optical tweezer technique

In their experiment, Moore, Wang and colleagues embedded several tens of radioactive lead-212 atoms in microspheres made of silica. They then levitated one microsphere at a time using the forces generated by a focused laser beam. This technique is known as optical tweezing and is routinely employed to hold and move nano-sized objects. The researchers recorded recoil measurements over a period of two to three days as the lead-212 (which has a half-life of 10.6 hours) decayed to the stable isotope lead-208 through the emission of alpha and beta particles (electrons).

According to Wang, the study is an important proof of principle, demonstrating conclusively that a single nuclear decay can be detected when it occurs in a much larger object. But the team also hopes to put it to good use for other applications. “We undertook this work as a first step towards directly measuring neutrinos as decay products,” Wang explains. “Neutrinos are central to many open questions in fundamental physics but are extremely difficult to detect. The technique we have developed could be a completely new way to study them.”

As well as detecting neutrinos, the new method, which is detailed in Physical Review Letters, could also be of interest for nuclear forensics. For example, it could be used to test whether dust particles captured from the environment contain potentially harmful radioactive isotopes.

The Yale researchers now plan to extend their technique to smaller silica spheres, which have better momentum sensitivity. “These smaller objects will allow us to sense the momentum kick from a single neutrino,” Wang tells Physics World. “Eventually, an approach like ours might also be applied to large arrays of spheres to sense other types of previously undetected, rare decays.”

The post Physicists detect nuclear decay in the recoil of a levitating sphere appeared first on Physics World.

]]>
Research update Principle of momentum conservation makes it possible to "see" individual alpha particles leaving a micron-scale silica bead https://physicsworld.com/wp-content/uploads/2024/08/Upload_IOP.jpg newsletter1
Radiation monitoring keeps track of nuclear waste contamination https://physicsworld.com/a/radiation-monitoring-keeps-track-of-nuclear-waste-contamination/ Wed, 07 Aug 2024 11:00:36 +0000 https://physicsworld.com/?p=115869 PhD studentship available to develop technologies for in situ characterization of nuclear fission products

The post Radiation monitoring keeps track of nuclear waste contamination appeared first on Physics World.

]]>
Nuclear reactors – whether operational or undergoing decommissioning – create radioactive waste. Managing this waste is a critical task, and the practice has been steadily optimized over the past few decades. Nevertheless, the strategies for nuclear waste disposal employed back in the 1960s and 70s were far from ideal, and the consequences remain for today’s scientists and engineers to deal with.

In the UK, spent nuclear fuel is typically stored in ponds or water-filled silos. The water provides radiation shielding, as well as a source of cooling for the heat generated by this material. In England and Wales, the long-term disposal strategy involves ultimately transferring the waste to a deep geological disposal facility, while in Scotland, near-surface disposal is considered appropriate.

The problem, however, is that some of the legacy storage sites are many decades old and at risk of leaking. When this radioactive waste leaks, it can contaminate the surrounding land and groundwater. The potential for radioactive contamination to reach the water environment is an ongoing problem, particularly at legacy nuclear reactor sites.

“The strategy for waste storage 50 years ago was different to that used now. There wasn’t the same consideration for where this waste would be disposed of long term,” explains Malcolm Joyce, distinguished professor of nuclear engineering at Lancaster University. “A common assumption might have been ‘well it’s going to go in the ground at some point’ whereas actually, disposal is a necessarily rigorous, regulated and complicated programme.”

In one example, explains Joyce, radioactive waste was stored temporarily in drums and sited in near-surface spaces. “But the drums have corroded over time and they’ve started to deteriorate, putting containment at risk and requiring secondary containment protection,” he says. “Elsewhere, some of the larger ponds in which spent nuclear fuel was stored are also deteriorating and risking loss of containment.”

Problematic radioisotopes

The process of nuclear fission generates a range of radioactive products with a variety of half-lives and levels of radiotoxicity – a complex property governed by their chemical and radioactive behaviour. One contaminant of particular concern is strontium-90 (Sr-90), a relatively high-yield fission product found in significant amounts in spent nuclear fuel and other radioactive waste.

Sr-90 emits relatively high-energy (0.6 MeV) beta radiation, has a fairly short half-life (about 30 years) and is water soluble, enabling it to migrate with groundwater. The major hazard, however, is its potential for uptake into biological systems. As a group 2 element similar to calcium, Sr-90 is a “bone seeker” that accumulates in bone and remains there, increasing the risk of leukaemia and bone cancer.

“The other challenge with strontium is that its daughter is even worse in radiotoxicity terms,” explains Joyce. Sr-90 decays into yttrium-90 (Y-90), which emits very high-energy beta radiation (2.2 MeV) that can penetrate up to 3.5 mm into aluminium. “The engineering challenge associated with Y-90 was first encountered at Three Mile Island, when they realised that the energy of the beta particles from it was sufficiently high to penetrate their personal protective equipment,” he notes.

Do not disturb

These potential biological hazards make it imperative to monitor potential radioactive contamination and address any leakages, and they also provide a basis for in situ monitoring of such leaks. One approach is to extract water or earth samples, often via boreholes, for offsite analysis in a laboratory. Unfortunately, what’s measured in the lab could be completely different to the radiological environment that you’re trying to understand. “This is an example that highlights the fact that trying to measure something actually changes the thing you’re trying to measure,” notes Joyce.

When undisturbed, Sr-90 and Y-90 reach secular equilibrium, a quiescent state in which Y-90 is produced at the same rate as it decays. Depending on pH, Y-90 tends to react with oxygen in the environment to form insoluble products – such as yttrium oxide, known as yttria, and colloidal carbonate complexes – that precipitate out of the surrounding water and can combine with calcium and silicon in the surrounding geology.

“There’s a steady-state radioactivity environment because it’s in secular equilibrium, and also a steady-state geochemistry environment associated with how much yttria is in suspension, settled out or stuck in the geology around it,” says Joyce. “But should it be disturbed by manual intervention this might lift plumes of material, redistributing the radioactivity in the area you’re working in. The risk associated with that is different to the risk assessments associated with the quiescent environment.”
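
The approach to secular equilibrium is easy to quantify: when the parent decays far more slowly than the daughter, the daughter’s activity grows as A_Y(t) = A_Sr(1 − e^(−λ_Y t)). A short sketch using the standard half-lives (Sr-90 about 28.8 years, Y-90 about 64 hours):

    import numpy as np

    t_half_Y = 64.1 * 3600        # Y-90 half-life in seconds
    lam_Y = np.log(2) / t_half_Y  # Y-90 decay constant

    A_Sr = 1.0  # parent activity, arbitrary units (effectively constant
                # over days, given the ~30-year Sr-90 half-life)

    # Daughter activity approaching secular equilibrium.
    for days in (1, 3, 7, 14, 28):
        t = days * 86400
        A_Y = A_Sr * (1 - np.exp(-lam_Y * t))
        print(f"after {days:2d} days: A_Y / A_Sr = {A_Y:.3f}")
    # After a few weeks the ratio is ~1: Y-90 decays as fast as it is made.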


Joyce and his team are taking a different approach, by developing a method to monitor radioactive contamination in situ. The technique exploits the bremsstrahlung radiation generated when high-energy beta particles emitted by Sr-90 and Y-90 interact with their surrounding environment and slow down. And while beta particles only travel a few millimetres before they can no longer be detected, bremsstrahlung radiation comprises far more penetrating X-ray photons that can be measured at much greater distances.

The researchers are also using an astrophysical technique to determine the distribution of the measured radioactivity. The approach uses the Moffat point spread function – developed back in 1969 to describe the profiles of stellar images – to analyse the depth and spread of the contamination and, importantly, how it is changing over time.

“If the depth of these radioactive features changes, that tells you whether things are getting worse or better,” Joyce explains. “Put simply, if they’re getting nearer to the surface, that’s probably not something that you want.”
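
For reference, the Moffat profile is a one-line function, I(r) = I0[1 + (r/α)²]^(−β), whose power-law wings make it a flexible fit for spatially spread sources (the parameter values below are arbitrary):

    import numpy as np

    def moffat(r, alpha, beta, I0=1.0):
        # Moffat (1969) point spread function: Gaussian-like core with
        # power-law wings whose strength is set by beta.
        return I0 * (1 + (r / alpha) ** 2) ** (-beta)

    r = np.linspace(0, 5, 6)               # radial distance, arbitrary units
    print(moffat(r, alpha=1.5, beta=2.5))  # intensity falling off with radius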

The PhD project

The team has now demonstrated that bremsstrahlung measurements can discriminate the combined Sr-90/Y-90 beta emission from gamma radiation emitted by caesium-137 (another high-yield fission product) during in situ groundwater monitoring. The next goal is to distinguish emissions from the two beta emitters.

As such, Joyce has secured funding from the UK’s Nuclear Decommissioning Authority for a PhD studentship to develop methods to detect and quantify levels of Sr-90 and Y-90 in contaminated land and aqueous environments. The project, based at Lancaster University, also aims to understand the accuracy with which the two radioisotopes can be separated and investigate their respective kinetics.

The first task will be to determine whether bremsstrahlung emissions can discriminate between these two sources of radioactive contamination. Bremsstrahlung is produced in a continuous energy spectrum, with an endpoint corresponding to the maximum energy of the beta particles (which themselves have a continuous energy distribution). Joyce points out that, while this endpoint is quite difficult to pinpoint, it could enable deconvolution of the contributions from Sr-90 and Y-90 to the bremsstrahlung spectrum.
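
A crude sketch of endpoint discrimination (our toy spectra, not the project’s method) uses maximum beta energies of roughly 0.55 MeV for Sr-90 and 2.28 MeV for Y-90, and models each bremsstrahlung spectrum as falling linearly to zero at its endpoint; everything recorded above the Sr-90 endpoint must then come from Y-90:

    import numpy as np

    E = np.linspace(0.01, 2.5, 1000)  # photon energy, MeV

    def toy_spectrum(E, endpoint):
        # Linearly falling toy spectrum, normalized to unit total counts.
        s = np.clip(1 - E / endpoint, 0, None)
        return s / s.sum()

    sr90 = toy_spectrum(E, 0.55)    # Sr-90 beta endpoint ~0.55 MeV
    y90 = toy_spectrum(E, 2.28)     # Y-90 beta endpoint ~2.28 MeV
    mixed = 0.5 * sr90 + 0.5 * y90  # equal activities, say

    # Counts above the Sr-90 endpoint anchor the Y-90 contribution,
    # which a fit can then subtract from the rest of the spectrum.
    above = E > 0.55
    print(f"fraction of counts above 0.55 MeV: {mixed[above].sum():.3f}")
    print(f"Y-90 contribution to that region : {0.5 * y90[above].sum():.3f}")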

It may also be possible to distinguish the two radioisotopes via direct detection of the beta particles, or a completely different solution may emerge. “Never say never with a PhD,” says Joyce. “There may be a better way of doing it that we’re not aware of yet.”

Joyce emphasizes the key role that such radiation monitoring techniques could play in nuclear decommissioning projects, such as the clean-up of the Dounreay shaft. The 65-m-deep shaft and silo were historically used to store radioactive waste from the Dounreay nuclear reactor in Scotland. This waste now needs to be retrieved, repackaged and stored somewhere isolated from people, animals and plants.

As the facility is emptied of radioactive material, the radiological environment will change – ideally becoming safer and less uncertain, with any changes helping to inform planning. “With this new technology we’ll be able to monitor radiation levels as the programme progresses, to understand exactly what’s happening in the environment as things are being cleaned up,” explains Joyce.

“The world would be a better place as a result of the ability to make these measurements, and they could inform how similar challenges are dealt with the world over,” Joyce tells Physics World. “If you asked me ‘why should somebody do this PhD?’, altruistically, it’s about taking us closer to the point where our grandchildren don’t have to worry about these things – that’s what’s important.”

Apply now

To find out more about the PhD studentship, which is fully funded for eligible UK students, contact Malcolm Joyce at m.joyce@lancaster.ac.uk. Candidates interested in applying should send a copy of their CV together with a personal statement or covering letter addressing their background and suitability for this project before the closing date of 31 August 2024.

The post Radiation monitoring keeps track of nuclear waste contamination appeared first on Physics World.

]]>
Analysis PhD studentship available to develop technologies for in situ characterization of nuclear fission products https://physicsworld.com/wp-content/uploads/2024/08/Dounreay-iStock_SteveAllenPhoto.jpg newsletter
Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution https://physicsworld.com/a/paradigm-shifts-positivism-realism-and-the-fight-against-apathy-in-the-quantum-revolution/ Wed, 07 Aug 2024 10:00:31 +0000 https://physicsworld.com/?p=115542 Jim Baggott reviews Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory by Adam Forrest Kay

The post Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution appeared first on Physics World.

]]>
Science can be a messy business. Scientists caught in the storm of a scientific revolution will try to react with calm logic and reasoning. But in a revolution the stakes are high, the atmosphere charged. Cherished concepts are abandoned as troubling new notions are cautiously embraced. And, as the paradigm shifts, the practice of science is overlaid with passionate advocacy and open hostility in near-equal measure. So it was – and, to a large extent, still is – with the quantum revolution.

Niels Bohr insisted that quantum theory is the result of efforts to describe a fundamentally statistical quantum world using concepts stolen from classical physics, which must therefore be interpreted “symbolically”. The calculation of probabilities, with no reference to any underlying causal mechanism that might explain how they arise, is the best we can hope for.

In the heat of the quantum revolution, Bohr’s “Copenhagen interpretation” was accused of positivism, the philosophy that valid knowledge of the physical world is derived only from direct experience. Albert Einstein famously disagreed, taking the time to explore alternatives more in keeping with a realist metaphysics, with a “trust in the rational character of reality and in its being accessible, to some extent, to human reason”, that had served science for centuries. Lest there be any doubt, Adam Forrest Kay’s Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory demonstrates that the Bohr–Einstein debate has never been resolved to anybody’s satisfaction and continues to this day.

Escape from Shadow Physics is a singular addition to the popular literature on quantum interpretations. Kay holds PhDs in both literature and mathematics and is currently a mathematics postdoc at the Massachusetts Institute of Technology. He stands firmly in Einstein’s corner, and his plea for a return to a realist programme is liberally sprinkled with passionate advocacy and open hostility in near-equal measure. He writes with the zeal of a true quantum reactionary.

Like many others before him, in arguing his case Kay needs first to build a monstrous, positivist Goliath that can be slain with the slingshot of realist logic and reasoning. This means embracing some enduring historical myths. These run as follows. The Bohr–Einstein debate was a direct confrontation between the subjectivism of the positivist and the objectivism of the realist. Bohr won the debate by browbeating the stubborn, senile and increasingly isolated Einstein into submission. Acting like some fanatical priesthood, physicists of Bohr’s church – such as Wolfgang Pauli, Werner Heisenberg and Léon Rosenfeld – shouted down all dissent, establishing the Copenhagen interpretation as a dogmatic orthodoxy.

Historical scientific myths are not entirely wrong, and typically hold some grains of truth. Rivals to the Copenhagen view were indeed given short shrift by the “Copenhagen hegemony”. Pauli sought to dismantle Louis de Broglie’s “pilot wave” interpretation soon after it was presented in 1927. He went on to dismiss its rediscovery by David Bohm in 1952 as “shadow physics beer-idea wish dreams”, and “not even new nonsense”. Rosenfeld dismissed Hugh Everett III’s “many worlds” interpretation of 1957 as “hopelessly wrong ideas”.

But Kay is not content with the myth as it is familiarly told, and so seeks to deepen it. He confers on Bohr “the charisma of the hypnotist, the charisma of the cult leader”, adding that “the Copenhagen group was, in a very real sense, a personality cult, centred on the special and wise Bohr”. Prosecuting such a case requires a selective reading of science history, snatching quotations where they fit the narrative, ignoring others where they don’t. In fact, Bohr did not deny objective reality, or the reality of electrons and atoms. In interviews conducted shortly before his death in 1962, Bohr reaffirmed that his core principle of “complementarity” (of waves and particles, for example) was “the only possible objective description”. Heisenberg, in contrast, was much less cautious in his use of language and makes an easier target for anti-positivist ire.

It can be argued that the orthodoxy, such as it is, is not actually based on philosophical pre-commitments. The post-war Americanization of physics drove what were judged to be pointless philosophical questions about the meaning of quantum theory to the fringes. Aside from those few physicists and philosophers who continued to nag at the problem, the majority of physicists just got on with their calculations, completely unconcerned about what the theory was supposed to mean. They just didn’t care.

As Bohm explained: “Everybody pays lip service to Bohr, but nobody knows what he says. People then get brainwashed into saying Bohr is right, but when the time comes to do their physics, they are doing something different.” Many who might claim to follow Bohr’s “dogma” satisfy their physical intuitions by continuing to think like Einstein.

Anton Zeilinger, who shared the 2022 Nobel Prize for Physics for his work on quantum entanglement and quantum information science, confessed that even physicists working in this new field consider foundations to be a bit dodgy: “We don’t understand the reason why. Must be psychological reasons, something like that, something very deep.” Kay admits this much when he writes: “Yes, many people think the debate is over and Bohr won, but that is actually a social phenomenon.” In other words, the orthodoxy is not philosophical, it is sociological. It has very little to do with Bohr and the Copenhagen interpretation. In truth, Kay is fighting for attention against the apathy and indifference characteristic of an orthodox mainstream physics, or what Thomas Kuhn called “normal science”.

As to how a modern-day realist programme might be pursued, Kay treats us to some visually suggestive experiments in which oil droplets follow trajectories determined by wave disturbances on the surface of the oil bath on which they move. He argues that such “quantum hydrodynamic analogues” show us that the pilot-wave interpretation merits much more attention than it has so far received. But while these analogues are intuitively appealing, the so-called “quantization” involved is as familiarly classical as musical notes generated by string or wind instruments. And, although such analogues may conjure surprising trajectories and patterns, they cannot conjure Planck’s constant. Or quantum entanglement.

But the pilot-wave interpretation demands a hefty trade-off. It features precisely the non-local, “peculiar mechanism of action at a distance” of the kind that Einstein abhorred, and which discouraged his own exploration of pilot waves in 1927. In an attempt to rescue the possibility that reality may yet be local, Kay reaches for a loophole in John Bell’s famous theorem and inequality. Yet he overlooks the enormous volume and variety of experiments that have been performed since the early 1980s, including tests of an inequality devised by the Nobel-prize-winning theorist Anthony Leggett that explicitly close the loophole he seeks to exploit.

Escape from Shadow Physics is a curate’s egg. Those readers who would condemn Bohr and the Copenhagen interpretation, for whatever reasons of their own, will likely cheer it on. Those looking for balanced arguments more reasoned than diatribe will likely be disappointed. Despite an extensive bibliography, Kay commits some curious sins of omission. But, while the journey that Kay takes may be flawed, there is yet sympathy for his destination. The debate does remain unresolved. Faced with the mystery of entanglement and non-locality, Bohr’s philosophy offers no solace. Kay (quoting a popular textbook) asks that we consider future generations in possession of a more sophisticated theory, who wonder how we could have been so gullible.

  • 2024 Weidenfeld & Nicolson 496pp £25 hb

The post Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution appeared first on Physics World.

]]>
Opinion and reviews Jim Baggott reviews Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory by Adam Forrest Kay https://physicsworld.com/wp-content/uploads/2024/07/2024-07-Baggott-quantum-entanglement-1185114379-iStock_Inkoly.jpg newsletter
First patients treated using minibeam radiation therapy https://physicsworld.com/a/first-patients-treated-using-minibeam-radiation-therapy/ Tue, 06 Aug 2024 12:00:49 +0000 https://physicsworld.com/?p=115854 Minibeam radiotherapy using a clinical orthovoltage unit successfully treats two patients

The post First patients treated using minibeam radiation therapy appeared first on Physics World.

]]>
Spatially fractionated radiotherapy is a novel cancer treatment that uses a pattern of alternating high-dose peaks and low-dose valleys to deliver a nonuniform dose distribution. Numerous preclinical investigations have demonstrated that by shrinking the peaks and valleys to submillimetre dimensions, the resulting microbeams confer extreme normal tissue tolerance, enabling delivery of extremely high peak doses and providing excellent tumour control.

The technique has not yet, however, been used to treat patients. Most preclinical studies employed synchrotron X-ray sources, which deliver microbeams at ultrahigh dose rates but are not widely accessible. Another obstacle is that these extremely narrow beams (100 µm or less) are highly sensitive to any motion during irradiation, which can blur the pattern of peak and valley doses.

Instead, a team at the Mayo Clinic in Rochester, Minnesota, is investigating the clinical potential of minibeam radiation therapy (MBRT), which employs slightly wider beams (500 µm or more) spaced by more than 1000 µm. Such minibeams still provide high normal tissue sparing and tumour control, but their larger size and spacing makes them less sensitive to motion. Importantly, minibeams can also be generated by conventional X-ray sources with lower dose rates.

Michael Grams and colleagues have now performed the first patient treatments using MBRT. Writing in the International Journal of Radiation Oncology, Biology, Physics, they describe the commissioning of a clinical radiotherapy system for MBRT and report on the first two patients treated.

Minibeam delivery

To perform MBRT, the researchers adapted the Xstrahl 300, a clinical orthovoltage unit with 180 kVp output. “Because minibeam radiotherapy uses very narrow beams of radiation spaced very closely together, it requires low-energy orthovoltage X-rays,” Grams explains. “Higher-energy X-rays from linear accelerators would scatter too much and blur the peaks and valleys together.”

The team used cones with diameters between 3 and 10 cm to define the field size and create homogeneous circular fields. This output was then split into minibeams using tungsten collimators with 0.5 mm wide slits spaced 1.1 mm apart.

Commissioning measurements showed that the percentage depth dose decreased gradually with depth, reaching 50% somewhere between 3.5 and 4 cm. Peak-to-valley ratios were highest at the surface and inversely related to cone size. Peak dose rates at 1 cm depth ranged from 110 to 120 cGy/min.

The low dose rate of the orthovoltage system means that treatment times can be quite long and patient motion may be an issue. To mitigate motion effects, the researchers created 3D printed collimator holders that conform to the patient’s anatomy. These holders are fixed to the patient, such that any motion causes the patient and collimator to move together, maintaining the spatial separation of the peak and valley doses.

“This treatment had never been delivered to a human before, so we had to figure out all of the necessary steps in order to do it safely and effectively,” says Grams. “The main challenge is patient motion, which we solved by attaching the collimator directly to the patient.”

First-in-human treatments

The team treated two patients with MBRT. The first had a large (14 × 14 × 11 cm) axillary tumour that was causing severe pain and restricted arm motion, prompting the decision to use MBRT to shrink the tumour and preserve normal tissue tolerance for future treatments. He was also most comfortable sitting up, a treatment position that’s only possible using the orthovoltage unit.

The second patient had a 7 × 6 × 3 cm ear tumour that completely blocked his external auditory canal, causing hearing loss, shooting pain and bleeding. He was unable to undergo surgery due to a fear of general anaesthesia and instead was recommended MBRT to urgently reduce pain and bleeding without compromising future therapies.

“These patients had very few treatment options that the attending physician felt would actually help mitigate their symptoms,” explains Grams. “Based on what we learned from our preclinical research, they were felt to be good candidates for MBRT.”

Both patients received two daily MBRT fractions with a peak dose of 1500 cGy at 1 cm depth, using the 10 cm cone for patient 1 and the 5 cm cone for patient 2. The radiation delivery time was 11.5 or 12 min per fraction, with the second fraction delivered after rotating the collimator by 90°.

Treatment response to minibeam radiotherapy

Prior to treatment, the collimator was attached to the patient and a small piece of Gafchromic film was placed directly on the tumour for in vivo dosimetry. For both patients, the films confirmed the pattern of peak and valley doses, with no evidence of dose blurring.

For patient 1, the measured peak and valley doses were 1900 and 230 cGy, respectively. The expected doses (based on commissioning measurements) were 2017 and 258 cGy, respectively. Patient 2 had measured peak and valley doses of 1800 and 180 cGy, compared with expected values of 1938 and 248 cGy.
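
A convenient figure of merit for such film measurements is the peak-to-valley dose ratio (PVDR), standard in spatially fractionated radiotherapy. A quick calculation from the doses quoted above:

    # Peak and valley doses (cGy) from the in vivo film measurements.
    patients = {
        "patient 1": {"measured": (1900, 230), "expected": (2017, 258)},
        "patient 2": {"measured": (1800, 180), "expected": (1938, 248)},
    }

    for name, doses in patients.items():
        for kind, (peak, valley) in doses.items():
            print(f"{name} {kind:8s}: PVDR = {peak / valley:.1f}")
    # PVDR ~ 8-10: the valleys between minibeams receive roughly a tenth
    # of the peak dose, which underpins the normal-tissue sparing.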

Both patients exhibited positive clinical responses to MBRT. Six days after his second treatment, patient 1 reported resolution of pain and improved arm motion. Three weeks later, the tumour continued to shrink and his full range of motion was restored. Despite the 10 cm cone not fully encompassing the large tumour, a uniform decrease in volume was still observed.

After one treatment, patient 2 had much reduced fluid leakage, and six days later, his pain and bleeding had completely abated and his hearing improved. At 34 days after MBRT, he continued to be asymptomatic and the lesion had completely flattened. Pleased with the outcome, the patient was willing to reconsider the recommended standard-of-care resection.

“The next step is a formal phase 1 trial to determine the maximum tolerated dose of minibeam radiotherapy,” Grams tells Physics World. “We are also continuing our preclinical work aimed at combinations of MBRT and systemic therapies like immunotherapy and chemotherapy drugs.”

The post First patients treated using minibeam radiation therapy appeared first on Physics World.

]]>
Research update Minibeam radiotherapy using a clinical orthovoltage unit successfully treats two patients https://physicsworld.com/wp-content/uploads/2024/08/6-08-24-MBRT-fig2-new.jpg newsletter1
‘Event-responsive’ electron microscopy focuses on fragile samples https://physicsworld.com/a/event-responsive-electron-microscopy-focuses-on-fragile-samples/ Tue, 06 Aug 2024 08:14:35 +0000 https://physicsworld.com/?p=115884 Pharmaceutical and catalysis research could benefit from new technique

The post ‘Event-responsive’ electron microscopy focuses on fragile samples appeared first on Physics World.

]]>
A new scanning transmission electron microscope (STEM) technique that modulates the electron beam in response to the scattering rate allows images to be formed with the fewest electrons possible. The researchers hope their “event-responsive electron microscopy“ could be used on fragile samples that are easily damaged by electron beams. The team is now working to implement their imaging paradigm with other microscopy techniques.

First developed in the 1930s, transmission electron microscopes have been invaluable for exploring almost all branches of science at tiny scales. These instruments rely on the fact that electrons can have far shorter de Broglie wavelengths than optical photons and hence can resolve much finer details. Visible-light microscopes cannot normally resolve features smaller than about 200 nm, but electron imaging can often achieve resolutions well below 0.1 nm. However, the higher energy of these electrons makes them more damaging to samples than light. Researchers must therefore keep the number of electrons scattered from a fragile sample to the absolute minimum needed to build up a clear image.

In a STEM, an image is created by rapidly scanning a focused beam of electrons across a sample in a grid of pixels. Most of these electrons pass straight through the sample, but a small percentage are scattered sharply by collisions. Detectors that surround the beam path record these scattering events. The electron scattering rate from a particular point tells microscopists the density around that point, and thereby allows them to reconstruct an image of the sample.

Unnecessary radiation damage

Normally, the same number of incident electrons is fired at each pixel and the number of scattered electrons is counted. To create enough collisions at weakly scattering regions to resolve them properly, strongly scattering regions are exposed to far more incident electrons than necessary. As a result, samples may suffer unnecessary radiation damage.

In the new work, electron microscopists led by Jonathan Peters and Lewys Jones at Trinity College Dublin, together with Bryan Reed of Integrated Dynamic Electron Solutions in the US and colleagues in the UK and Japan, inverted the traditional measurement protocol by measuring the time required to achieve a fixed number of scattered electrons from every pixel. Jones offers an analogy: “If you look at the weather forecast on TV you see the rainfall in millimetres per hour,” he says; “If you look at how that’s measured by weather forecasters they go and put a beaker outside in the rain and, one hour later, they see how much is in the beaker…If I ask you how hard it’s raining, you’re going to go outside, stick your hand out and see how long it takes for, say, three drops to hit your hand…After you’ve reached some fixed [number of drops], you don’t wait for the rest of the hour in the rain.”

Event response

The researchers implemented an event-responsive microscopy protocol in which individual scattered electrons from each pixel are recorded as they arrive, and this information is fed back to the electron microscope. Once the set number of scattered electrons has been recorded from a given pixel, a “beam blanker” is switched on for the remainder of that pixel’s normal dwell time. “A powerful voltage is applied to skew the beam off into the sidewall,” explains Jones. “It has the same effect of opening and closing a shutter on a camera.” This allowed the researchers to measure the scattering rate from all the sample points without subjecting any of them to unnecessary electron flux. “It’s not a slow process,” says Jones; “The image is formed in front of the user in real-time.”
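
The inverted measurement logic is simple enough to capture in a toy simulation. The sketch below (a minimal illustration of the general idea, not the team’s actual control code; the sample map, dwell-time limit and count threshold are all invented for the example) contrasts a conventional fixed-dose scan with an event-responsive one in which the “beam” is blanked as soon as a pixel reaches its target count:

import numpy as np

rng = np.random.default_rng(1)

# Toy sample: per-pixel probability that an incident electron is scattered
scatter_prob = np.array([[0.01, 0.05],
                         [0.20, 0.50]])

N_FIXED = 1000      # incident electrons per pixel in a conventional scan
K_TARGET = 10       # scattered electrons to collect per pixel (event-responsive)
MAX_DOSE = N_FIXED  # dwell-time limit at which the beam is blanked anyway

def fixed_dose_scan(p):
    """Conventional STEM: identical dose everywhere, count scattered electrons."""
    counts = rng.binomial(N_FIXED, p)
    return counts / N_FIXED, np.full_like(p, float(N_FIXED))

def event_responsive_scan(p):
    """Inverted protocol: blank the beam once K_TARGET events are recorded."""
    rate = np.zeros_like(p)
    dose = np.zeros_like(p)
    for idx, prob in np.ndenumerate(p):
        hits = 0
        for n in range(1, MAX_DOSE + 1):
            if rng.random() < prob:
                hits += 1
            if hits == K_TARGET:   # threshold reached: beam blanker fires
                break
        rate[idx] = hits / n       # scattering-rate estimate for this pixel
        dose[idx] = n              # electrons this pixel actually received
    return rate, dose

_, dose_fixed = fixed_dose_scan(scatter_prob)
_, dose_event = event_responsive_scan(scatter_prob)
print("fixed-dose electrons per pixel:\n", dose_fixed)
print("event-responsive electrons per pixel:\n", dose_event)

Running it shows the strongly scattering pixels receiving only a small fraction of the fixed-dose exposure – the dose saving the Dublin team exploits.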

The researchers used their new protocol to produce images of biologically and chemically fragile samples with little to no radiation damage. They now hope it will prove possible to produce electron micrographs of samples such as some catalysts and drug molecules that are currently obliterated by electron beams before an image can be formed. They are also exploring the protocol’s use in other imaging techniques such as electron energy loss spectroscopy and X-ray microscopy. “It will probably take a number of years for us and other groups to fully unpick what such a fundamental shift in how measurements are made will mean for all the other kinds of techniques that people use microscopes for,” says Jones.

Electron microscopy expert Quentin Ramasse of the University of Leeds is enthusiastic about the work. “It’s inventive, it’s potentially changing the way we record data in a STEM and it’s doing so in a very simple fashion. It could provide an extra tool in our arsenal to not necessarily completely remove beam damage but certainly to minimize it,” he says. “It really is [the result of] clever electronics, clever hardware and a very clever take on how to drive the motion of the probe as a function of what the sample’s response has been up to that point.”

The research is described in Science.

Tsung-Dao Lee: Nobel laureate famed for work on parity violation dies aged 97 https://physicsworld.com/a/tsung-dao-lee-nobel-laureate-famed-for-work-on-parity-violation-dies-aged-97/ Mon, 05 Aug 2024 15:28:55 +0000 https://physicsworld.com/?p=115888 In the 1950s Lee proposed that "parity" is violated by the weak force

The Chinese-American particle physicist Tsung-Dao Lee died on 4 August at the age of 97. Lee shared half of the 1957 Nobel Prize for Physics with Chen Ning Yang for their theoretical work that overturned the notion that parity is conserved in the weak force – one of the four fundamental forces of nature. Known as “parity violation”, it was proved experimentally by, among others, Chien-Shiung Wu.

Born on 24 November 1926 in Shanghai, Lee began studying physics in 1943 at the National Chekiang University (now known as Zhejiang University) and, later, at National Southwest Associated University in Kunming. In 1946 Lee moved to the US on a Chinese government fellowship to join the University of Chicago, where he did a PhD under the guidance of Enrico Fermi, completing it in 1950.

After his PhD, Lee worked at Yerkes Astronomical Observatory in Wisconsin, the University of California at Berkeley and the Institute for Advanced Study at Princeton before moving to Columbia University in 1953. Three years later, he became the youngest-ever full professor at Columbia, remaining at the university until retiring in 2011.

Looking in the mirror

It was at Columbia that Lee did his Nobel-prize-winning work on parity, a property of elementary particles that expresses how they behave upon reflection in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. But since the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta.

These particles, also known as K-mesons, are identical except that the tau decays into three pions with a net parity of -1, while a theta particle decays into two pions with a net parity of +1. This puzzling observation meant that either the tau and theta are different particles or – controversially – that parity in the weak interaction is not conserved, with Lee and Yang proposing various ways to test their ideas (Phys. Rev. 104 254).
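
The arithmetic behind the puzzle is worth spelling out (a standard textbook argument, assuming the pions carry no relative orbital angular momentum). Each pion has intrinsic parity −1, and parity is multiplicative, so

\[
P(\theta \to \pi\pi) = (-1)^2 = +1, \qquad P(\tau \to \pi\pi\pi) = (-1)^3 = -1 .
\]

If the tau and theta were really one and the same particle, it would have to decay into final states of opposite parity – impossible unless the weak interaction fails to conserve parity.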

Wu, who was also working at Columbia, then suggested an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60. In what became known as the “Wu experiment”, she and colleagues from the National Bureau of Standards used a magnetic field to align the cobalt nuclei with their spins parallel, before counting the number of electrons emitted in both an upward and downward direction.

Wu and her team found that far more electrons were emitted downwards than upwards. If parity were conserved, the emission pattern would look the same in the mirror image; instead, when the field was reversed – as it would appear in a mirror – more electrons were detected travelling upwards, proving that parity is violated in the weak interaction.
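
The logic can be compressed into one line (again a standard argument, not a quotation from Wu’s paper). The measured asymmetry correlates the nuclear spin \(\vec{J}\) – an axial vector, unchanged by mirror reflection – with the electron momentum \(\vec{p}_e\) – a polar vector, which flips sign – so the observable is a pseudoscalar:

\[
\langle \vec{J} \cdot \vec{p}_e \rangle \xrightarrow{\;P\;} -\langle \vec{J} \cdot \vec{p}_e \rangle .
\]

Any parity-conserving interaction would force this average to zero; Wu’s decisively non-zero result left no way out.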

For their work, Lee and Yang shared the 1957 Nobel Prize for Physics. Then just 30, Lee was the second-youngest Nobel-prize-winning scientist after Lawrence Bragg, who was 25 when he shared the 1915 Nobel Prize for Physics with his father, William Henry Bragg. It has been argued that Wu should have shared the prize too for her experimental evidence of parity violation, although the story is complicated because two other groups were also working on similar experiments at the same time.

Influential physicist

Lee went on to publish several books including Particle Physics and Introduction to Field Theory in 1981 and Science and Art in 2000. As well as the Nobel prize, he was also awarded the Albert Einstein Award in 1957 and the Matteucci Medal in 1995.

In the 1980s, Lee initiated the China-US Physics Examination and Application (CUSPEA) programme, which has since helped to train hundreds of physicists. He was also instrumental in the development of China’s first high-energy accelerator, the Beijing Electron-Positron Collider, which switched on in 1989.

Robert Crease, a historian from Stony Brook University who interviewed Lee many times, said that Lee also had a significant influence on the Brookhaven National Laboratory in New York. “He did some of his Nobel work there in the summer of 1956,” says Crease. “Lee and Yang would make regular Friday-afternoon trips to the local Westhampton beach where they would draw equations in the sand. They’d also yell at each other so loudly that others could sometimes hear them down the hall.”

Later, in the 1990s, Lee also played a role in the transition of Brookhaven’s ISABELLE proton-proton collider into the Relativistic Heavy-Ion Collider. “He was a mentor to many people at Brookhaven,” Crease adds. “He was artistic too – he made many sculptures – and was funny. I was honoured when Lee asked me to sign a copy of my edited autobiography of the theorist Robert Serber, who had adored him.”

“His groundbreaking contributions to his field have left a lasting impact on both theoretical and experimental physics,” noted Columbia University President Minouche Shafik in a statement.  “He was a beloved teacher and colleague for whom generations of Columbians will always be grateful.”

At a reception in 2011 to mark Lee’s retirement, William Zajc, chair of Columbia’s physics department, noted that it was “impossible to overstate [Lee’s] influence on the department of physics, on Columbia and on the entire field of physics.”

Lee, on the other hand, noted that retirement is “like gardening”. “You may not be cultivating a new species, but you can still keep the old beautiful thing going on,” he added.

  • A memorial service in honour of Lee will be held at 9.00 a.m. (CST) on 25 August 2024 at the Tsung-Dao Lee Institute in Shanghai, China, with an online stream in both English and Chinese. More information, including an invitation for colleagues to share condolences, photos or video tributes, is available on the Tsung-Dao Lee memorial website.

Introducing Python for electrochemistry research https://physicsworld.com/a/introducing-python-for-electrochemistry-research/ Mon, 05 Aug 2024 13:21:33 +0000 https://physicsworld.com/?p=114400 Available to watch now, The Electrochemical Society, in partnership with BioLogic and Gamry Instruments, explores the advantages of using Python in your electrochemical research

To understand electrochemical behaviour and reaction mechanisms, electrochemists must analyze the correlation between current, potential and other parameters, such as in situ information. As experimental datasets grow larger and analysis tasks become more complex, one can spend days sorting data, fitting models and repeating these routine procedures. Moreover, sharing the analysis procedure and reproducing the results can be challenging when different commercial software packages, parameters and steps are involved. An open-source, free, all-in-one platform for electrochemistry research is therefore needed.

Python is an interpreted programming language that has emerged as a transformative force within the scientific community. Its syntax prioritizes readability and simplicity, making analyses easy to reproduce and to share across platforms. Furthermore, its rich ecosystem of community-provided packages supports a wide range of electrochemical tasks, from data analysis and visualization to fitting and simulation.

This webinar presents a general introduction to Python for electrochemists who are new to programming. Starting from basic concepts, it demonstrates Python’s capabilities in electrochemistry research with worked examples, spanning data handling, treatment, fitting and visualization through to electrochemical simulation, and offers suggestions and resources for learning the language.
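
As a flavour of the sort of task the webinar addresses, the sketch below (our own illustration rather than material from the webinar; the circuit values, noise level and starting guesses are invented) simulates an impedance spectrum for a simplified Randles circuit and recovers the circuit parameters with SciPy:

import numpy as np
from scipy.optimize import least_squares

# Simplified Randles circuit: solution resistance R_s in series with the
# parallel combination of charge-transfer resistance R_ct and double-layer
# capacitance C_dl, so Z(w) = R_s + R_ct / (1 + j*w*R_ct*C_dl)
def impedance(w, R_s, R_ct, C_dl):
    return R_s + R_ct / (1 + 1j * w * R_ct * C_dl)

# Synthetic "measurement": 50 frequencies from 0.1 Hz to 100 kHz plus noise
w = 2 * np.pi * np.logspace(-1, 5, 50)
true_params = (10.0, 100.0, 20e-6)   # R_s = 10 ohm, R_ct = 100 ohm, C_dl = 20 uF
rng = np.random.default_rng(0)
z_meas = (impedance(w, *true_params)
          + rng.normal(0, 0.3, w.size)
          + 1j * rng.normal(0, 0.3, w.size))

# Fit by minimizing the stacked real and imaginary residuals
def residuals(params):
    z = impedance(w, *params)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=[5.0, 50.0, 1e-5])
print("fitted R_s, R_ct, C_dl:", fit.x)

Because the whole analysis lives in a short, self-contained script, the fit can be rerun, shared and audited far more easily than a chain of steps in proprietary software – precisely the reproducibility argument made above.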

An interactive Q&A session follows the presentation.

Zheng Weiran

Weiran Zheng is an associate professor in chemistry at the Guangdong Technion-Israel Institute of Technology (GTIIT), China. His research focuses on understanding the activation and long-term deactivation mechanisms of electrocatalysts at the atomic scale using operando techniques such as spectroscopy and surface probe microscopy. He is particularly interested in water electrolysis, ammonia electrooxidation and sensing, and his work also addresses fundamental questions of current experimental electrochemistry practice, aiming for better data accountability and reproducibility. Weiran Zheng received his BS (2009) and PhD (2015) from Wuhan University. Before joining GTIIT, he worked as a visiting researcher at the University of Oxford (2012–2014) and a research fellow at the Hong Kong Polytechnic University (2016–2021).

The Electrochemical Society

MR-guided radiotherapy: where are we now and what does the future hold? https://physicsworld.com/a/mr-guided-radiotherapy-where-are-we-now-and-what-does-the-future-hold/ Mon, 05 Aug 2024 12:00:08 +0000 https://physicsworld.com/?p=115858 Speakers at the recent AAPM Annual Meeting examined the clinical impact and future potential of MR-guided radiotherapy

Aurora-RT MR-linac

The past few decades have seen MR-guided radiotherapy evolve from an idea on the medical physicists’ wish list to a clinical reality. At the recent AAPM Annual Meeting, experts in the field took a look at three MR-linac systems, the clinical impact of this advanced treatment technology and the potential future trajectory of MR-guided radiotherapy.

Millimetres matter

Maria Bellon from Cedars-Sinai (speaking on behalf of James Dempsey and ViewRay Systems) began the symposium with an update on the MRIdian, an MR-guided radiotherapy system that combines a 6 MV linac with a 0.35 T MRI scanner. She explained that ViewRay Systems was formed in early 2024 to save the MRIdian technology following the demise of ViewRay Technologies.

Bellon described ViewRay’s quest to minimize treatment margins – the region outside the tumour that is deliberately irradiated. In radiotherapy, geometric margins must be added to account for microscopic disease spread and for uncertainties in targeting. “But millimetres matter when it comes to improving outcomes for cancer patients,” she said.

The MRIdian A3i, the company’s latest platform, is designed to minimize margins and maximize accuracy using three key features: auto-align, auto-adapt and auto-target. Auto-align works by aligning a very sharp beam to high-resolution images of the soft tissues to be targeted or spared. The auto-adapt workflow begins with the acquisition of a high-resolution 3D MRI for localization. Within 30 s, it automatically performs image registration, contour mapping, predicted dose calculation, IMRT plan re-optimization, best plan selection and plan QA.

Once treatment begins, auto-targeting is employed to deal with organ motion. The treatment beam is gated by the real-time MR images and only switched on while the tumour lies within the defined margins. Organ motion can also cause interplay effects, in which the dose distribution contains gaps or areas of overlap that result in hot and cold spots. Larger margins can worsen this effect – another reason to keep them as small as possible.
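
The gating idea itself reduces to a few lines of logic. The toy sketch below (our illustration of the general principle, not ViewRay’s software; the margin, planned position and tracked positions are invented) switches the beam on only while the tracked target sits within its margin:

import numpy as np

MARGIN_MM = 2.0                   # gating boundary around the planned position
planned = np.array([0.0, 0.0])    # planned target position (mm)

def beam_on(tracked_mm):
    """Enable the beam only while the target lies within the margin."""
    return np.linalg.norm(tracked_mm - planned) <= MARGIN_MM

# Simulated per-frame target positions from real-time MR tracking (mm)
frames = np.array([[0.5, 0.2], [1.9, 0.1], [2.8, 0.4], [1.0, -0.6]])
for i, pos in enumerate(frames):
    print(f"frame {i}: beam {'ON' if beam_on(pos) else 'OFF (blanked)'}")

In practice the hard part is not this comparison but obtaining fast, reliable tumour positions from the cine MR images in the first place.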

The MRIdian MR-linac

Bellon shared some clinical studies demonstrating how margins matter. The MIRAGE trial, for example, showed that 2 mm margins and MR-guided radiotherapy resulted in significantly lower toxicity for prostate cancer patients than 4 mm margins and CT guidance. Elsewhere, the multicentre