Tokyo University of Science researchers unravel the complex mechanism of magnetization reversal

Researchers develop a super-hierarchical and explanatory analysis of magnetization reversal that could improve the reliability of spintronics devices.

The reliability of data storage and the writing speed of advanced magnetic devices depend on drastic, complex changes in microscopic magnetic domain structures. However, quantifying these changes is extremely challenging, which limits our understanding of magnetic phenomena. To tackle this, researchers from Japan used machine learning and topology to develop an analysis method that quantifies the complexity of magnetic domain structures, revealing hidden features of magnetization reversal that are barely discernible to the human eye.

Spintronic devices and their operation are governed by the microstructures of magnetic domains. These domain structures undergo complex, drastic changes when an external magnetic field is applied to the system. The resulting fine structures are not reproducible, and quantifying their complexity is difficult. Our understanding of the magnetization reversal phenomenon has therefore been limited to crude visual inspections and qualitative methods, representing a severe bottleneck in material design. It has been difficult to predict even the stability and shape of the magnetic domain structures in Permalloy, a well-known material studied for over a century.

Addressing this issue, a team of researchers headed by Professor Masato Kotsugi from Tokyo University of Science, Japan, recently developed an AI-based method for analyzing material functions more quantitatively. In their work published in Science and Technology of Advanced Materials: Methods, the team used topological data analysis to develop a super-hierarchical and explanatory analysis method for magnetization reversal processes. According to the research team, “super-hierarchical” refers to connecting micro- and macro-scale properties, which are usually treated in isolation but, in the big picture, contribute jointly to the physical explanation.

The team quantified the complexity of the magnetic domain structures using persistent homology, a mathematical tool from computational topology that measures topological features of data persisting across multiple scales. The team further visualized the magnetization reversal process in two-dimensional space using principal component analysis, a data analysis procedure that summarizes large datasets by a small number of “summary indices,” facilitating visualization and analysis. As Prof. Kotsugi explains, “The topological data analysis can be used for explaining the complex magnetization reversal process and evaluating the stability of the magnetic domain structure quantitatively.” The team discovered that this analysis can detect slight structural changes, invisible to the human eye, that indicate a hidden feature dominating the metastable/stable reversal processes. They also successfully traced the branching of the macroscopic reversal process back to its cause in the original microscopic magnetic domain structure.
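For readers who want a concrete picture of this two-step pipeline, here is a minimal Python sketch of the same idea (our own illustration, not the authors' code): persistence diagrams of domain images are computed with the GUDHI library's cubical complexes, summarized as lifetime histograms, and projected into a two-dimensional feature space with PCA. The library choices, the histogram vectorization, and the synthetic stand-in images are all assumptions.

```python
# Illustrative sketch (not the authors' code): topological features of
# magnetic domain images via persistent homology, then PCA to 2D.
# Assumes the `gudhi` and `scikit-learn` packages.
import numpy as np
import gudhi
from sklearn.decomposition import PCA

def persistence_features(image, n_bins=32):
    """Summarize the persistence diagram of a 2D grayscale image
    (a stand-in for a magnetic domain image) as a histogram of
    feature lifetimes -- one simple way to vectorize topology."""
    h, w = image.shape
    cc = gudhi.CubicalComplex(dimensions=[h, w],
                              top_dimensional_cells=image.ravel())
    diagram = cc.persistence()  # list of (dimension, (birth, death))
    lifetimes = [death - birth for _, (birth, death) in diagram
                 if np.isfinite(death)]
    hist, _ = np.histogram(lifetimes, bins=n_bins, range=(0.0, 1.0))
    return hist.astype(float)

# One image per applied-field step of a (synthetic) reversal sequence.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]

X = np.array([persistence_features(img) for img in images])
trajectory = PCA(n_components=2).fit_transform(X)  # 2D "feature space"
print(trajectory)  # each row traces one field step of the reversal
```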

In a recent study, researchers from Japan developed an analysis method based on persistent homology, a mathematical tool, and principal component analysis to quantify the complex changes in microscopic magnetic domain structures that are hard to detect with the naked eye.

 

The novelty of this research lies in its ability to connect magnetic domain microstructures and macroscopic magnetic functions freely across hierarchies by applying the latest mathematical advances in topology and machine learning. This enables the detection of subtle microscopic changes and the subsequent prediction of stable/metastable states in advance, which was hitherto impossible. “This super-hierarchical and explanatory analysis would improve the reliability of spintronics devices and our understanding of stochastic/deterministic magnetization reversal phenomena,” says Prof. Kotsugi.

Interestingly, the new algorithm, with its superior explanatory capability, can also be applied to the study of chaotic phenomena such as the butterfly effect. On the technological front, it could potentially improve the reliability of next-generation magnetic memory writing and aid the development of new hardware for the next generation of devices.

Reference

DOI: https://doi.org/10.1080/27660400.2022.2149037

Title of original paper: Super-hierarchical and explanatory analysis of magnetization reversal process using topological data analysis

Journal: Science and Technology of Advanced Materials: Methods

Making sense of coercivity in magnetic materials with machine learning

Coercivity is a physical property of magnetic materials that is of much importance in the optimization of energy efficiency in various applications, such as electric motors. However, it is difficult to analyze using currently available theories, since they cannot account for the material’s defects and other types of inhomogeneities. To tackle this, scientists combined data science, materials informatics, and an extension of the Ginzburg–Landau (GL) model to explain how coercivity arises from microstructures in magnetic materials.

Soft magnetic materials, i.e., materials that can be easily magnetized and demagnetized, play an essential role in transformers, generators, and motors. The ability of a magnetic material to resist an external magnetic field without changing its magnetization is known as “coercivity,” a property closely linked to energy loss. In applications such as electric cars, low-coercivity materials are highly desirable for achieving higher energy efficiency.

However, coercivity and other magnetic phenomena associated with energy losses in soft magnetic materials originate from very complex interactions. The usual macroscale analyses suffer from oversimplification of the material’s structure and often need additional parameters to adjust the theory to the experiment. Thus far, although the tools and frameworks to analyze coercivity are widely available, they mostly do not directly consider the defects and boundaries in the material, which is fundamental to developing new applications.

Against this backdrop, a research team including Prof. Masato Kotsugi from Tokyo University of Science (TUS), Japan, recently developed a new approach to connect the microscale characteristics to a macroscopic physical property, coercivity, using a combination of data science, machine learning, and an extension of the GL model. This study, led by Dr. Alexandre Lira Foggiatto from TUS, was published in Communications Physics on 8 November 2022.

The team aimed to find a way to automate the coercivity analysis of magnetic materials while accounting for their microstructural characteristics. To this end, they first gathered data for both simulated and real magnetic materials in the form of microscopic images of their magnetic domains. The images, after preprocessing, were used as input for a machine learning technique called principal component analysis (PCA), which is commonly used to analyze large datasets. Through PCA, the team condensed the most relevant information (features) in these preprocessed images into a two-dimensional “feature space.”

This approach, combined with other machine learning techniques, such as artificial neural networks, allowed the researchers to visualize a realistic energy landscape of magnetization reversal in the material within the feature space. A careful comparison of the results for experimental and simulated images demonstrated the proposed methodology to be a convenient strategy for mapping the most important features of the material in a meaningful way. “Describing the energy landscape using machine learning showed good results for both experimental and simulated data. Both shared similar shapes as well as similar explanatory variables and correlations between them,” remarks Dr. Foggiatto.
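As a rough sketch of this landscape-building step (our own illustration on synthetic stand-in data, not the study's images or model), one can fit a small neural network that maps 2D feature-space coordinates to an energy value and evaluate it on a grid for plotting:

```python
# Illustrative sketch (not the study's code): fit a smooth "energy
# landscape" over a 2D PCA feature space with a small neural network.
# Assumes scikit-learn; the data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
coords = rng.uniform(-1, 1, size=(500, 2))  # (PC1, PC2) of domain images
energy = (coords ** 2).sum(axis=1) + 0.1 * rng.normal(size=500)  # stand-in

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                     random_state=0).fit(coords, energy)

# Evaluate on a grid; zz can then be drawn as a contour plot.
g = np.linspace(-1, 1, 50)
xx, yy = np.meshgrid(g, g)
zz = model.predict(np.column_stack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
print(zz.shape)  # (50, 50) grid of predicted energies
```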

Overall, this study showcases how materials informatics can be cleverly leveraged to not only automate but also clarify the physical origin of coercivity in soft magnetic materials. With any luck, it will help materials scientists and physicists derive new physical laws and models to go beyond the state-of-the-art models and frameworks. Moreover, the applications of this strategy go well beyond coercivity, as Dr. Foggiatto highlights: “Our method can be extended to other systems for analyzing properties such as temperature and strain/stress, as well as the dynamics of high-speed magnetization reversal processes.”

Interestingly, this is the second study Prof. Masato Kotsugi and his colleagues have published in relation to the extended Landau free-energy model they are developing. They hope that, in the near future, their functional analysis models will help achieve high efficiency in electric car motors, paving the way to more sustainable transportation.

The nose-brain pathway: exploring the role of trigeminal nerves in delivering intranasally administered antidepressants

A study of trigeminal nerves reveals how the intranasal administration of a novel glucagon-like peptide-2 derivative can produce antidepressant effects.

In a recent study, Japanese scientists have developed a novel concept of a nose-to-brain system for the clinical application of neuropeptides. They developed a derivative of glucagon-like peptide-2 and found that when administered intranasally, it is efficiently delivered through the trigeminal nerve to the site of action and exhibits antidepressant-like effects. This is the first demonstration in the world that intranasally administered neuropeptides reach the brain (hippocampus and hypothalamus) via neurons.

Intranasal (in.) administration has been gaining popularity as a non-invasive approach for delivering drugs directly to the brain. In this approach, drugs reach the central nervous system (CNS) through the respiratory or olfactory epithelia of the nasal mucosa. Transport from the respiratory epithelium via the trigeminal nerve is considerably slower than transport from the olfactory epithelium via the olfactory bulb (OB) or cerebrospinal fluid (CSF). However, only a small portion of the nasal mucosa in humans is made up of olfactory epithelium, prompting researchers to focus on improving in. drug delivery time through the predominant respiratory epithelium.

To facilitate this, a team of researchers including Professor Chikamasa Yamashita from Tokyo University of Science, Japan, developed a novel drug to test its uptake efficacy by the CNS.

To offer more insight, Prof. Yamashita states: “In a previous study, we conjugated functional sequences (namely, a membrane permeability-promoting sequence [CPP] and an endosomal escape-promoting sequence [PAS]) to glucagon-like peptide-2 (GLP-2), which is effective against treatment-resistant depression, so that it can be efficiently taken up by neurons. Using this, we aimed to construct a nose-to-brain system mediated by the trigeminal nerve in the respiratory epithelium”.

 

Intranasal administration of PAS-CPP-GLP-2 results in its delivery to the brain via axons of the trigeminal nerve. Source: Tokyo University of Science

 

While studying the uptake of this novel PAS-CPP-GLP-2 by the CNS, the team noted that its anti-depressant effects via in. administration remained on par with intracerebroventricular (icv.) administration at identical doses. Therefore, Prof. Yamashita and his colleagues elucidated a nose-to-brain transfer mechanism to explain why intranasally administered GLP-2 derivatives show drug effects at the same dose as intracerebroventricularly administered GLP-2 derivatives. The team’s findings have been documented in a study made available online on 30 September 2022 in Volume 351 of the Journal of Controlled Release.

The team performed icv. and in. administration of PAS-CPP-GLP-2 in mice. The amount of drug transferred to the whole brain was quantified by enzyme-linked immunosorbent assay (ELISA). Surprisingly, the ELISA revealed that a much smaller amount of intranasally administered PAS-CPP-GLP-2 reached the brain than intracerebroventricularly administered PAS-CPP-GLP-2. However, both icv. and in. administration showed efficacy at the same dose. This is attributed to the fact that icv. administration introduces drugs at the place of origin of the CSF (the ventricles), causing them to diffuse into the CSF and spread through the brain. Since the CSF is present in the spaces outside the capillaries of the brain, the team inferred that a large portion of the icv.-administered PAS-CPP-GLP-2 likely stayed there without being transported to its sites of action. On the other hand, nasally administered GLP-2 derivatives were rapidly taken up by the trigeminal nerve of the respiratory epithelium and efficiently reached the site of action by transiting neurons.

Prof. Yamashita explains: “This suggests that the peptide delivered by icv. administration is present in large amounts in the brain but reaches the site of action only in very small amounts, as it remains in the perivascular space. On the other hand, intranasally administered PAS-CPP-GLP-2, unlike the icv.-administered peptide, may be transferred to the site of action without passing through the CSF or perivascular space”.

These results prompted the team to identify the central drug delivery route following in. administration. This route involved the principal sensory trigeminal nucleus, followed by the trigeminal lemniscus, and led to the drug’s sites of action. Finally, it was discovered that the migration of PAS-CPP-GLP-2 through neurons was the reason behind its pharmacological activity despite its low levels in the brain upon in. administration.

Prof. Yamashita explains, “This is the world’s first drug delivery system that allows intranasally administered peptides to be delivered to the central nervous system via nerve cells, delivering peptides to the site of action with the same efficiency as icv. administration.”

Speaking about the future applications of the team’s findings, Prof. Yamashita concludes: “Current data suggests the possibility of extending the use of this system from treating depression to delivering drugs in patients with Alzheimer’s disease. It is therefore expected to be applied to neurodegenerative diseases with high, unmet medical demand.”

Reference

DOI: https://doi.org/10.1016/j.jconrel.2022.09.047

Title of original paper: Involvement of trigeminal axons in nose-to-brain delivery of glucagon-like peptide-2 derivative

Journal: Journal of Controlled Release

Trial by wind: testing the heat resistance of carbon fiber-reinforced ultra-high-temperature ceramic matrix composites

Researchers use an arc-wind tunnel to test the heat resistance of carbon fiber-reinforced ultra-high-temperature ceramic matrix composites. 

Carbon fiber-reinforced ultra-high-temperature ceramic (UHTC) matrix composites are extensively used in space shuttles and high-speed vehicles. However, these composites suffer from a lack of oxidation resistance. Recently, researchers from Japan tested the heat resistance of these composites at very high temperatures, providing insight into the modifications needed to prevent UHTC degradation. Their findings could have huge implications for the manufacture of space shuttle orbiters.

Carbon fiber-reinforced carbon (C/C) is a composite material made of carbon fiber reinforcing a matrix of glassy carbon or graphite. It is best known as the material used in hypersonic vehicles and space shuttle orbiters, which cruise at speeds greater than Mach 5. Since the 1970s, it has also been used in the brake systems of Formula One racing cars. However, even though C/C has excellent mechanical properties at high temperatures in inert atmospheres, it lacks oxidation resistance at high temperatures in air, which limits its widespread use.

Researchers have found that ultra-high-temperature ceramics (UHTCs), which include transition metal carbides and diborides, show good oxidation resistance. In previous studies, zirconium-titanium (Zr-Ti) alloy infiltration has shown promising results for improving the heat resistance of carbon fiber-reinforced UHTC matrix composites (C/UHTCMCs). However, the behavior of these composites at very high temperatures (>2000 °C) was unknown.

Set against this backdrop, a group of researchers from Japan have evaluated the potential utility of Zr-Ti alloy-infiltrated C/UHTCMCs at temperatures above 2000 °C. Their study, led by Junior Associate Professor Ryo Inoue from Tokyo University of Science (TUS), was published in the Journal of Materials Science and made available online on October 27, 2022. The research team consisted of Mr. Noriatsu Koide and Assistant Professor Yutaro Arai from TUS, Professor Makoto Hasegawa from Yokohama National University, and Dr. Toshiyuki Nishimura from the National Institute for Materials Science.

Speaking of the motivation behind their study, Prof. Inoue says, “The research is an extension of the research and development of ceramics and ceramics-based composite materials. In recent years, we have received inquiries from several heavy-industry manufacturers regarding materials that can be used at temperatures above 2000 °C. We have also started to work with these manufacturers to develop new materials.”

The C/UHTCMCs were manufactured using melt infiltration, the most cost-effective way to fabricate these materials. To study the applicability of this material, three types of C/UHTCMCs were fabricated using alloys with three different Zr:Ti atomic ratios. To characterize the heat resistance, the team used a method called arc-wind tunnel testing, which involves exposing the material to an extremely high-enthalpy airflow inside a tunnel, similar to the conditions that spacecraft experience while re-entering the atmosphere.

The team found that the amount of Zr in the alloy had a strong effect on the degradation of the composite at all temperatures, owing to the thermodynamic preference for the oxidation of Zr-rich carbides over Ti-rich carbides. Further, the Zr and Ti oxides formed on the composite surface prevented further oxidation, and the oxide composition depended on the composition of the infiltrated alloys. Thermodynamic analysis revealed that the oxides formed on the composite surface were composed of ZrO2, ZrTiO4, and TiO2 solid solutions.

At temperatures above 2000 °C, the thickness and weight of the samples after the arc-wind tunnel tests increased with the Zr content of the composites. The team also observed that the melting point of the surface oxides increased with the Zr content. At temperatures above 2600 °C, the oxides formed were exclusively liquid-phase, meaning that the matrix composition must be designed thermodynamically to prevent the recession of UHTC composites.

“We have successfully studied the degradation of C/UHTCMCs at temperatures above 2000 °C using thermodynamic analysis. We have also shown that the matrix design needs modification to prevent the degradation of the composites. Our research has the potential to contribute to the realization of ultra-high-speed passenger aircraft, re-entry vehicles, and other hypersonic vehicles,” concludes Prof. Inoue.

These results could have important consequences in the production of advanced space shuttle orbiters and high-speed vehicles.

Reference

DOI: https://doi.org/10.1007/s10853-022-07861-x

Title of original paper: Degradation of carbon fiber-reinforced ultra-high-temperature ceramic matrix composites at extremely high temperature using arc-wind tunnel tests

Journal: Journal of Materials Science

Novel derivative of “love hormone” oxytocin improves cognitive impairment in Alzheimer’s

Alzheimer’s disease (AD), characterized by an accumulation of β-amyloid protein (Aβ) in brain tissue, is a leading cause of dementia. Researchers at Tokyo University of Science have previously reported on the oxytocin-induced reversal of impaired synaptic plasticity triggered by amyloid β peptide (25-35) (Aβ25-35). They now show that an oxytocin derivative with modifications to enhance brain perfusion can reverse Aβ25-35-induced cognitive impairment in mice.

The cognitive decline and memory loss observed in Alzheimer’s disease (AD) is attributed to the accumulation of β-amyloid protein (Aβ), which impairs neural function in the brain. Experimentation has shown that oxytocin, a peptide hormone primarily responsible for parturition, bonding, and lactation, also regulates cognitive behavior in the rodent central nervous system (CNS). This finding, along with the identification of oxytocin receptors in CNS neurons, has spurred interest in the potential role of oxytocin in reversing memory loss tied to cognitive disorders like AD.

However, peptides like oxytocin are characterized by weak blood-brain barrier permeability and so can only be efficiently delivered to the brain via intracerebroventricular (ICV) administration. ICV administration, however, is an invasive technique that is impractical to implement clinically.

Delivering peptides to the CNS via intranasal (IN) administration is a viable clinical option. Prof. Chikamasa Yamashita at Tokyo University of Science recently patented a method to increase the efficiency of peptide delivery to the brain, by introducing cell-penetrating peptides (CPPs) and a penetration accelerating sequence (PAS) through structural modifications. Previous work had confirmed that both CPPs and the PAS benefit the nose-to-brain delivery pathway. Now, a group of researchers, led by Prof. Akiyoshi Saitoh and Prof. Jun-Ichiro Oka, leveraged this approach to prepare an oxytocin derivative: PAS-CPPs-oxytocin. Their findings were published online in Neuropsychopharmacology Reports on 19 September 2022.

“We have previously shown that oxytocin reverses amyloid β peptide (25-35) (Aβ25-35)-induced impairment of synaptic plasticity in rodents. We wanted to see if PAS-CPPs-oxytocin could be delivered more efficiently to the mouse brain for clinical application, and if it improved cognitive functional behavior in mice,” states Prof. Oka.

 

The group first developed an Aβ25-35 peptide-induced amnesia model by delivering Aβ25-35 into the mouse brain via the ICV route. During the course of the study, the spatial working and spatial reference memories of these mice were evaluated using the Y-maze and Morris water maze (MWM) tests, respectively. After confirming that memory was impaired in the Aβ25-35-treated mice, PAS-CPPs-oxytocin and native oxytocin were administered via the IN and ICV routes, respectively, to see if learning and memory improved in the treated mice. Finally, the distribution of the IN-administered oxytocin derivative in brain tissue was profiled by imaging a fluorescently tagged version of the derivative.

The results of this study were quite promising! The tagged PAS-CPPs-oxytocin showed distribution throughout the mouse brain following its IN administration. While the ICV administration of native oxytocin improved test outcomes in both the Y-maze and MWM tests, the IN-administered PAS-CPPs-oxytocin yielded memory-improving effects in the Y-maze test. Hailing the team’s discovery, Prof. Oka says, “My team is the first to show that the oxytocin derivative can improve the Aβ25-35-induced memory impairment in mice. This suggests that oxytocin may help reduce the cognitive decline we see in Alzheimer’s disease.”

Why are these findings clinically useful? Prof. Oka explains the broader implications of their work, “The oxytocin derivative enters the brain more efficiently. Furthermore, since IN delivery is a non-invasive procedure, this modified version of the hormone could potentially be a clinically viable treatment for Alzheimer’s disease.”

Perturbing the Bernoulli shift map in binary systems

Researchers effectively tune the parameters of a perturbation method to preserve chaos in the Bernoulli shift map output

The Bernoulli shift map is a well-known chaotic map in chaos theory. For a binary system, however, the output is not chaotic and converges to zero instead. One way to prevent this is by perturbing the state space of the map. In a new study, researchers explore one such perturbation method to obtain non-converging outputs with long periods and analyze these periods using modular arithmetic, obtaining a complete list of parameter values for optimal perturbations.

Is it possible for a deterministic system to be unpredictable? Although counter-intuitive, the answer is yes. Such systems are called “chaotic systems,” and they are characterized by sensitive dependence on initial conditions and long-term unpredictability. The behavior of such systems is often described using what is known as a “chaotic map.” Chaotic maps find applications in areas such as algorithm design, data analysis, and numerical simulations.

One well-known example of a chaotic map is the Bernoulli shift map. In practical applications of the Bernoulli shift map, the outputs are often required to have long periods. Strangely enough, however, when the Bernoulli shift map is implemented in a binary system, such as a digital computer, the output sequence is no longer chaotic and instead converges to zero!

Perturbation methods are an effective countermeasure: a disturbance is applied to the state of the Bernoulli shift map to prevent its output from converging. However, the choice of parameters for obtaining suitable perturbations has lacked a theoretical underpinning.
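Both the collapse to zero and the effect of a perturbation are easy to reproduce in software. The sketch below is our own illustration of the general idea, not the authors' perturbation scheme or parameter values: it iterates the map in K-bit fixed point, where the unperturbed orbit gets stuck at zero within K steps, while one simple disturbance, an LFSR-style feedback of the discarded top bit, restores a long period.

```python
# Illustrative sketch (our own, not the paper's scheme): the Bernoulli
# shift map x -> 2x mod 1 in K-bit fixed point collapses to zero, while
# an LFSR-style feedback perturbation keeps the orbit from collapsing.
K = 8
MASK = (1 << K) - 1    # K-bit state; x is represented as X / 2^K
TAPS = 0b0001_1101     # feedback constant (an assumption; the paper
                       # derives its best parameters via modular arithmetic)

def bernoulli(x):
    # Unperturbed map: left shift, top bit discarded, zeros shift in.
    return (x << 1) & MASK

def perturbed(x):
    # XOR a constant back in whenever the discarded top bit was 1.
    msb = x >> (K - 1)
    x = (x << 1) & MASK
    return x ^ TAPS if msb else x

def cycle_length(step, x0, limit=1 << 16):
    seen, x = {}, x0
    for n in range(limit):
        if x in seen:
            return n - seen[x]
        seen[x] = n
        x = step(x)
    return None

print(cycle_length(bernoulli, 0b1011_0110))  # 1: stuck at zero after K steps
print(cycle_length(perturbed, 0b1011_0110))  # 255 with this taps choice
```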

In a recent study, made available online on October 21, 2022, and published in Volume 165, Part 1 of the journal Chaos, Solitons & Fractals in December 2022, Professor Tohru Ikeguchi from the Tokyo University of Science, in association with Dr. Noriyoshi Sukegawa from the University of Tsukuba, both in Japan, addressed this issue, laying the theoretical foundations for effective parameter tuning. “While numerical simulations can tell us which values of the parameters can prevent convergence, there is no theoretical background for choosing these values. In this paper, we aimed to investigate the theoretical support behind this choice,” explains Prof. Ikeguchi.

Accordingly, the researchers made use of modular arithmetic to tune a dominant parameter in the perturbation method. In particular, they defined the best value for the parameter, which depended on the bit length specified in implementations. The team further analyzed the output period for which the parameter had the best value. Their findings showed that the resulting periods came close to the trivial theoretical upper bounds. Based on this, the researchers obtained a complete list of the best parameter values for a successful implementation of the Bernoulli shift map.

Additionally, an interesting consequence of their investigation was its relation to Artin’s conjecture on primitive roots, an open question in number theory. The researchers suggested that, provided Artin’s conjecture is true, their approach would be theoretically guaranteed to be effective for any bit length.

Overall, the theoretical foundations put forth in this research are of paramount importance in the practical applications of chaotic maps in general. “A notable advantage of our approach is that it provides a theoretical support to the choice of best parameters. In addition, our analysis can also be partially applied to other chaotic maps, such as the tent map and the logistic map,” highlights Dr. Sukegawa.

With distinct advantages such as simplicity and ease of implementation, the Bernoulli shift map is highly desirable in several practical applications. And, as this study shows, sometimes chaos is preferable to order!

 

 

Novel thin, flexible sensor characterises high-speed airflows on curved surfaces

Inefficient fluid machinery used in the energy and transportation sectors is responsible for greenhouse gas emissions and the resulting global warming. To improve efficiency, it is necessary to characterize and reduce flow separation on curved surfaces. To this end, researchers from Japan have now developed a flexible, thin-film microelectromechanical system-based airflow sensor that can measure complex, three-dimensional flow separation on curved walls in high-speed airflows.

The energy and transportation sectors often make use of different kinds of fluid machinery, including pumps, turbines, and aircraft engines, all of which entail a high carbon footprint. This results mainly from inefficiencies in the fluid machinery caused by flow separation around curved surfaces, which is typically quite complex in nature.

To improve the efficiency of fluid machinery, one therefore needs to characterize the near-wall flow on curved surfaces to suppress this flow separation. The challenge in accomplishing this is threefold. First, conventional flow sensors are not flexible enough to fit onto the curved walls of fluid machinery. Second, existing flexible sensors suitable for curved surfaces cannot detect the fluid angle (direction of flow). Third, these sensors can only detect flow separation at speeds below 30 m/s.

In a new study, Prof. Masahiro Motosuke from the Tokyo University of Science (TUS) in Japan and his colleagues, Mr. Koichi Murakami, Mr. Daiki Shiraishi, and Dr. Yoshiyasu Ichikawa from TUS, in collaboration with Mitsubishi Heavy Industries, Japan, and Iwate University, Japan, took on this challenge. As Prof. Motosuke states, “Sensing the shear stress and its direction on curved surfaces, where flow separation easily occurs, has been difficult to achieve, in particular without using a novel technique.” Their work was published in Volume 13, Issue 8 of Micromachines on 12 August 2022.

The team, in their study, developed a polyimide thin film-based flexible flow sensor that can be easily installed on curved surfaces without disturbing the surrounding airflow, a key requirement for efficient measurement. To enable this, the sensor was based on microelectromechanical system (MEMS) technology. Moreover, the novel design allowed multiple sensors to be integrated for simultaneous measurement of the wall shear stress and flow angle on the surface of the wall.

To measure the shear stress on the walls, the sensor monitored the heat loss from a micro-heater, while the flow angle was estimated using an array of six temperature sensors around the heater, which facilitated multidirectional measurement. The team conducted numerical simulations of the airflow to optimize the geometry of the heaters and sensor arrays. Using a high-speed airflow tunnel as the testing environment, the team achieved effective flow measurements across a wide range of airflow speeds, from 30 to 170 m/s. The developed sensor demonstrated both high flexibility and scalability. “The circuits around the sensor can be pulled out using a flexible printed circuit board and installed in a different location, so that only a thin sheet is attached to the measurement target, minimizing the effect on the surrounding flow,” elaborates Prof. Motosuke.

The team estimated the heater output to vary as the one-third power of the wall shear stress, while the sensor output comparing the temperature difference between two oppositely placed sensors oscillated sinusoidally as the flow angle was changed.
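Taken together, these two relationships suggest a simple way to invert the raw signals into flow quantities. The sketch below is purely illustrative: the calibration constant, the assumption of three opposed sensor pairs at 60° spacing, and the cosine response model are ours, not values or formulas from the paper.

```python
# Purely illustrative inversion of the two reported relationships.
# The constant A, the three-pair geometry, and the cosine response
# model are assumptions, not the paper's calibration.
import numpy as np

A = 0.12  # hypothetical heater calibration constant

def wall_shear_stress(heater_output):
    # heater output ~ A * tau_w**(1/3)  =>  tau_w = (output / A)**3
    return (heater_output / A) ** 3

def flow_angle(pair_diffs):
    # Six sensors around the heater = three opposed pairs at 0, 60, and
    # 120 degrees; each pair's temperature difference is modeled as
    # cos(flow angle - pair direction), so a phase fit recovers the angle.
    dirs = np.radians([0.0, 60.0, 120.0])
    x = np.sum(pair_diffs * np.cos(dirs))
    y = np.sum(pair_diffs * np.sin(dirs))
    return np.degrees(np.arctan2(y, x))

print(wall_shear_stress(0.6))                      # shear stress in Pa
print(flow_angle(np.array([0.866, 0.866, 0.0])))   # ~30 degrees
```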

The developed sensor has the potential for a wide range of applications in industrial-scale fluid machinery that often involve complex flow separation around three-dimensional surfaces. Moreover, the working principle used to develop this sensor can be extended beyond high-speed subsonic airflows.

“Although this sensor is designed for fast airflows, we are currently developing sensors that measure liquid flow and can be attached to humans based on the same principle. Such thin and flexible flow sensors can open up many possibilities,” highlights Prof. Motosuke.

Taken together, the novel MEMS sensor could be a game-changer in the development of efficient fluid machineries with reduced detrimental effects on our environment.

***

Reference

DOI: https://doi.org/10.3390/mi13081299

Title of original paper: Development of a Flexible MEMS Sensor for Subsonic Flow

Journal: Micromachines

About The Tokyo University of Science

Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan’s development in science through inculcating the love for science in researchers, technicians, and educators.

With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society”, TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today’s most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.

Website: https://www.tus.ac.jp/en/mediarelations/

About Professor Masahiro Motosuke from Tokyo University of Science

Masahiro Motosuke is a Professor in the Department of Mechanical Engineering at the Tokyo University of Science (TUS), Japan. He earned his PhD in Engineering from Keio University, Japan, and has held positions at the Japan Society for the Promotion of Science and the Technical University of Denmark. His research into thermofluidics and thermofluidics-based sensors has resulted in multiple journal articles, conference papers, and book chapters. Prof. Motosuke has received multiple awards for his research from professional organizations such as the Heat Transfer Society of Japan. For more information, visit: https://www.rs.tus.ac.jp/motlab/en/index.html

Scalable, fully coupled quantum-inspired processor solves optimisation problems

Annealing processors are more energy efficient and quicker at solving mathematical optimization problems than PCs. Researchers at Tokyo University of Science have now developed a new approach to realizing scalable fully coupled annealing processors. These quantum-inspired systems can model the interactions between magnetic spins and use them to solve complex optimization problems. The new method greatly outperforms modern CPUs and shows potential for applications in drug discovery, artificial intelligence, and materials science.

Have you ever been faced with a problem where you had to find an optimal solution out of many possible options, such as finding the quickest route to a certain place, considering both distance and traffic? If so, the problem you were dealing with is what is formally known as a “combinatorial optimization problem.” While mathematically formulated, these problems are common in the real world and spring up across several fields, including logistics, network routing, machine learning, and materials science.

However, large-scale combinatorial optimization problems are very computationally intensive to solve using standard computers, making researchers turn to other approaches. One such approach is based on the “Ising model,” which mathematically represents the magnetic orientation of atoms, or “spins,” in a ferromagnetic material. At high temperatures, these atomic spins are oriented randomly. But as the temperature decreases, the spins line up to reach the minimum energy state where the orientation of each spin depends on its neighbors. It turns out that this process, known as “annealing,” can be used to model combinatorial optimization problems such that the final state of the spins yields the optimal solution.
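The correspondence between annealing and optimization is easy to demonstrate in miniature. The following Python sketch is a plain software illustration of simulated annealing, not the hardware discussed below; the graph, the cooling schedule, and all names are our assumptions. The couplings encode a small max-cut instance, so minimizing the Ising energy maximizes the cut, which is essentially how combinatorial problems are mapped onto annealing hardware.

```python
# Software illustration of annealing a fully coupled Ising model (not
# the TUS chip): couplings J encode a small max-cut instance; the
# minimum-energy spin configuration corresponds to the maximum cut.
import numpy as np

rng = np.random.default_rng(42)
n = 16
W = rng.integers(0, 2, size=(n, n))      # random unweighted graph
W = np.triu(W, 1); W = W + W.T           # symmetric, zero diagonal
J = -W                                    # max cut <-> min Ising energy

s = rng.choice([-1, 1], size=n)          # random initial spins
T = 5.0                                   # start "hot": spins flip freely
while T > 0.01:
    for _ in range(n):
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s)       # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                 # Metropolis acceptance rule
    T *= 0.95                            # cool toward the minimum-energy state

cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if s[i] != s[j])
print("cut size:", cut, "partition:", s)
```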

Researchers have tried creating annealing processors that mimic the behavior of spins using quantum devices, and have attempted to develop semiconductor devices using large-scale integration (LSI) technology aiming to do the same. In particular, Professor Takayuki Kawahara’s research group at Tokyo University of Science (TUS) in Japan has been making important breakthroughs in this particular field.

In 2020, Prof. Kawahara and his colleagues presented, at the IEEE SAMI 2020 international conference, one of the first fully coupled LSI annealing processors, that is, one accounting for all possible spin-spin interactions rather than only interactions between neighboring spins, comprising 512 fully connected spins. Their work appeared in the journal IEEE Transactions on Circuits and Systems I: Regular Papers. Such systems are notoriously hard to implement and scale up owing to the sheer number of spin-spin connections that need to be considered. While using multiple fully connected chips in parallel was a potential solution to the scalability problem, this made the required number of interconnections (wires) between chips prohibitively large.

In a recent study published in Microprocessors and Microsystems, Prof. Kawahara and his colleague demonstrated a clever solution to this problem. They developed a new method in which the calculation of the system’s energy state is divided among multiple fully coupled chips first, forming an “array calculator.” A second type of chip, called “control chip,” then collects the results from the rest of the chips and computes the total energy, which is used to update the values of the simulated spins. “The advantage of our approach is that the amount of data transmitted between the chips is extremely small,” explains Prof. Kawahara. “Although its principle is simple, this method allows us to realize a scalable, fully connected LSI system for solving combinatorial optimization problems through simulated annealing.”
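A minimal software analogue of this division of labor, as we read the description (the block sizes, names, and data are our assumptions, not the chip architecture), might look like this:

```python
# Our reading of the partitioning idea in miniature (not the chip design):
# each "array calculator" handles a block of rows of the coupling matrix,
# and a "control chip" aggregates the small per-chip results.
import numpy as np

rng = np.random.default_rng(0)
n, n_chips = 384, 4
J = rng.normal(size=(n, n)); J = (J + J.T) / 2
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)

blocks = np.array_split(np.arange(n), n_chips)

def chip_local_fields(rows, s):
    # Work assigned to one array-calculator chip: the local fields
    # h_i = sum_j J_ij * s_j for its block of spins.
    return J[rows] @ s

# Control chip: concatenate the per-chip vectors (the only data that
# crosses chip boundaries) and compute the total energy for the update.
h = np.concatenate([chip_local_fields(rows, s) for rows in blocks])
E = -0.5 * s @ h
print("total energy:", E)
```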

The researchers successfully implemented their approach using commercial FPGA chips, which are widely used programmable semiconductor devices. They built a fully connected annealing system with 384 spins and used it to solve several optimization problems, including a 92-node graph coloring problem and a 384-node maximum cut problem. Most importantly, these proof-of-concept experiments showed that the proposed method brings true performance benefits. Compared with a standard modern CPU modeling the same annealing system, the FPGA implementation was 584 times faster and 46 times more energy efficient when solving the maximum cut problem.

Now, with this successful demonstration of the operating principle of their method in FPGA, the researchers plan to take it to the next level. “We wish to produce a custom-designed LSI chip to increase the capacity and greatly improve the performance and power efficiency of our method,” Prof. Kawahara remarks. “This will enable us to realize the performance required in the fields of material development and drug discovery, which involve very complex optimization problems.”

Finally, Prof. Kawahara notes that he wishes to promote the implementation of their results to solve real problems in society. His group hopes to engage in joint research with companies and bring their approach to the core of semiconductor design technology, opening doors to the revival of semiconductors in Japan.

Make sure to watch out for these groundbreaking annealing processors in the future!

New nanocomposite films boost heat dissipation in thin electronics

Heat dissipation is essential for maintaining the performance of electronic devices. However, efficient heat dissipation is a major concern for thin-film electronics since conventional heat sinks are bulky. Researchers from Japan found a solution to this problem in sea squirts or ascidians. They prepared flexible nanocomposite films using an ascidian-derived cellulose nanofiber matrix and carbon fiber fillers. The prepared films demonstrate excellent anisotropic in-plane heat conduction and the carbon fiber fillers inside are reusable.

The last few decades have witnessed a tremendous advance in electronics technology, with the development of devices that are thinner, lighter, flexible, and robust. However, as devices get thinner, so does the space for accommodating the internal working components. This has created an issue of improper heat dissipation in thin-film devices, since conventional heat sink materials are bulky and cannot be integrated into them. Thus, there is a need for thermal diffusion materials that are thin and flexible and can be implemented in thin-film devices for efficient heat dissipation.

Currently, several substrate materials can act as heat diffusers as thin films, but most diffuse heat in the in-plane direction isotropically. This, in turn, could create thermal interference with neighboring components of a device. “For a substrate on which multiple devices are mounted in high density, it is necessary to control the direction of thermal diffusion and find an effective heat removal path while thermally insulating between the devices. The development of substrate films with high anisotropy in in-plane thermal conductivity is, therefore, an important target,” explains Junior Associate Professor Kojiro Uetani from Tokyo University of Science (TUS) in Japan, who researches advanced materials for thermal conductivity and formerly belonged to SANKEN (The Institute of Scientific and Industrial Research), Osaka University.

In a recent study available online on 20 July 2022 and published in Volume 14, Issue 29 of ACS Applied Materials & Interfaces on 27 July 2022, Dr. Uetani and his team, comprising Assistant Professor Shota Tsuneyasu from National Institute of Technology, Oita College, and Prof. Toshifumi Satoh from Tokyo Polytechnic University, both in Japan, reported a newly developed nanocomposite film made of cellulose nanofibers and carbon fiber-fillers that demonstrated excellent in-plane anisotropic thermal conductivity.

Many polymer composites with thermally conductive fillers have been proposed to enhance thermal conductivity. However, there are few reports on materials with particulate or plate-like fillers that exhibit thermal conductivity anisotropy, which is important to prevent thermal interference between adjacent devices. Fibrous fillers such as carbon fibers (CF), on the other hand, can provide in-plane anisotropy in two-dimensional materials due to their structural anisotropy.

It is also important to select a matrix with high thermal conductivity. Cellulose nanofibers (CNFs) extracted from the mantle of ascidians have been reported to exhibit higher thermal conductivity (about 2.5 W/mK) than conventional polymers, making them suitable for use as a heat-dissipating material. As indicated by the ability to write with a pencil on paper, cellulose has a high affinity for carbon materials and is easy to combine with CF fillers. For example, hydrophobic CFs cannot be dispersed in water by themselves, but in the presence of CNFs, they disperse easily. Accordingly, the team chose bio-based, ascidian (sea squirt)-derived CNFs as the matrix.

For material synthesis, the team prepared an aqueous suspension of CFs and CNFs and then used a technique called liquid 3D patterning. The process resulted in a nanocomposite consisting of a cellulose matrix with uniaxially aligned carbon fibers. To test the thermal conductivity of the films, the team used the laser-spot periodic heating radiation thermometry method.

They found that the material showed a high in-plane thermal conductivity anisotropy of 433%, with a conductivity of 7.8 W/mK in the aligned direction and 1.8 W/mK in the in-plane orthogonal direction. They also installed a powder electroluminescent (EL) device on a CF/CNF film to demonstrate its effective heat dissipation. In addition, the nanocomposite film could cool two closely placed pseudo heat sources without any thermal interference.

Apart from the excellent thermal properties, another major advantage of the CF/CNF films is their recyclability. The researchers were able to extract the CFs by burning off the cellulose matrix, allowing them to be reused. Overall, these findings not only provide a framework for designing 2D films with novel heat-dissipating patterns but also encourage sustainability in the process. “The waste that we humans generate has a huge environmental impact. Heat transfer fillers, in particular, are often specialized and expensive materials. As a result, we wanted to create a material that does not go to waste after usage but can be recovered and reused for further applications,” concludes Dr. Uetani.

Indeed, with cooler smartphones and lower waste, it’s a win-win for everyone!

***

Reference

DOI: https://doi.org/10.1021/acsami.2c09332

Title of original paper: Thermal Diffusion Films with In-Plane Anisotropy by Aligning Carbon Fibers in a Cellulose Nanofiber Matrix

Journal: ACS Applied Materials & Interfaces

Authors: Kojiro Uetani, Kosuke Takahashi, Rikuya Watanabe, Shota Tsuneyasu, and Toshifumi Satoh

Role of overconfidence, perceived ability in preferences for income equality

Income inequality is at an all-time high worldwide. Now, researchers at the Tokyo University of Science have observed that overconfidence plays an important role in how people view their individual ability to earn. They found that overconfident people’s realization of the gap between their perceived ability and their income lowers their faith in the economy being fair and meritocratic. However, this does not translate into higher support for reducing income inequality.

Overconfidence in one’s ability is not uncommon among humans. It can be observed in areas ranging from driving ability and productivity to calculating returns on investment projects. Overconfidence can also lead people to think that they aren’t earning as much as they believe they could. This should encourage overconfident people to think that society is unfair. Furthermore, it should increase support for more concentrated efforts, including government interventions, to reduce income inequality and mitigate the perceived unfairness of society. However, is this really the case?

A new study by researchers from Tokyo University of Science and Princeton University seeks to answer this question. The research team, which included Junior Associate Professors Tomoko Matsumoto and Daiki Kishishita from Tokyo University of Science and Atsushi Yamagishi from Princeton University, aimed to find out how the preferences of overconfident people, specifically those concerning income inequality, change when they are made aware of a gap between their economic status and their self-evaluated ability. The study was made available online in the European Journal of Political Economy on 28 August 2022.

“There is a large variation in the degree to which people support or oppose income redistribution, even among countries with similar levels of inequality. We are interested in understanding why those who would benefit economically from the implementation of income redistribution policies oppose such policies, and have focused on the nature of the ‘self-confidence overload’,” says Dr. Matsumoto, explaining the rationale for their study.

To this end, the researchers conducted an online survey in the United States with 4,471 participants. The survey was framed in such a way that the questions randomly reinforced a participant’s self-perceived income-ability gap. The novelty of the study stems from the fact that previous studies have been lab-controlled experiments, whereas this study tests the presented theory in a real economic environment using the participants’ actual incomes.

The study yielded a number of surprising results. The researchers found that participants who stated that their income was lower than their ability to earn lost their confidence in meritocracy and their faith in the economy being fair. They came to view the economy and society as unfair, hindering them from earning to their full potential. The researchers also noted that people believed a negative income-ability gap was the result of an unfair economy rather than individual responsibility.

Upon realizing the negative income-ability gap, more left-wing participants were in favor of reducing income inequality than right-wing and centrist participants. However, people across the political spectrum did not favor government intervention as a way to reduce income inequality. Government intervention did not garner a lot of support even among left-wing participants with high trust in the government. Explaining this anomaly, Dr. Matsumoto says, “Scholars have previously argued that characteristics such as party ideology or family and personal values are major determinants of preferences for redistribution and changing a belief about social and economic environments may have a limited role. Their limited effect on preferences for reducing income inequality may stem from a similar mechanism.”

Interestingly, it was noted that people following a right-wing ideology showed higher support for ensuring that people get paid according to their ability than government intervention.

The researchers believe their findings would be relevant in countries apart from the United States, as overconfidence in one’s ability is prevalent across the world. However, they anticipate differences based on the population’s belief in the state of their economy. Addressing the implications of their findings, Dr. Matsumoto says, “I believe that identifying who is for and who is against reducing inequality will help to alleviate social conflicts in a society where inequality is growing and polarization is increasing.”

***

Reference

DOI: https://doi.org/10.1016/j.ejpoleco.2022.102279

Title of original paper: Overconfidence, Income-ability Gap, and Preferences for Income Equality

Journal: European Journal of Political Economy
