Could a paper transistor offer an alternative to silicon?

As technology advances, scientists look for ways to enhance electronic applications and devices. Electronics are getting smaller and more diverse, and as this happens there is a growing need for flexibility in the transistors that make these devices work. Unfortunately, silicon and polymers may not fulfill the requirements of the transistors of the future.

"The problem with silicon is that it is toxic and brittle," says Jaehwan Kim. "Making transistors from polymers can solve the problem of brittleness, but many of these polymers are also toxic for humans, and their manufacture can produce a lot of pollution."

Kim is a scientist at INHA University in South Korea. Along with Sungryul Yun, Sang-Dong Jang, Gyu-Young Yun and Joo-Hyung Kim, Kim has been studying a way to develop a transistor that is more environmentally friendly and meets the requirements of flexibility and usability for future electronic devices. "What we have found," he says, "is that it is possible to make a transistor out of a special kind of cellulose paper." The results of the team's efforts are available in Applied Physics Letters: "Paper transistor made with covalently bonded multiwalled carbon nanotube and cellulose."

"This cellulose paper is flexible and more environmentally friendly," Kim explains. "We modified the cellulose paper so that it has the properties of a transistor. We added carbon nanotubes to improve the electrical properties of the cellulose, since a transistor should be a semiconductor. We fabricated this transistor, tested it, and found that it worked."

The South Korean team had to deposit electrodes on the top and the bottom of the transistor in order to produce the proper electric field. "This is a very unique feature," Kim points out. "This is quite challenging technology, putting electrodes and wires on this paper, and using nanotubes as part of the transistor. You can see why there are challenges ahead to fully implementing this."

Even though this is a good first step, Kim realizes that much remains to be done before mass production of this type of transistor can move forward. "First of all, we have to fully understand why this material offers such an interesting phenomenon. We will also need to improve its performance. While it works, the transistor could perform better, and we will need to work on enhancing it."

He continues: "We need to study the mechanics of the paper, and figure out how it can be mass-produced. Our lab can't start mass production, and we will have to develop a system that can capture the unique process required to make these transistors."

However, Kim is hopeful that answers can be found. "We have been working on this for about six years, and are pleased with the progress made so far. While this technology won't be fully available immediately, we are taking the first steps toward transistors that are flexible, biocompatible and more sustainable for the environment."

More information: Visit CRI EAPap from INHA University. Sungryul Yun, et al., "Paper transistor made with covalently bonded multiwalled carbon nanotube and cellulose," Applied Physics Letters (2009).

Citation: Could a paper transistor offer an alternative to silicon? (2009, September 22) retrieved 18 August 2019

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

E-reader Roundup At The 2010 CES

At the 2010 Las Vegas CES, many manufacturers introduced e-reader products in the hope of sparking consumer interest in the e-book market. 2010 is shaping up to be an innovative year for e-book readers, as color technology will play an important part in e-readers this year. The show was littered with hybrids and new screen technologies looking to compete with Amazon's Kindle and Barnes & Noble's Nook. With any luck, the competition should heat up later this year and drive down the price of first-generation e-readers.

Two of the most impressive electronic-ink devices were the 10.5-inch Que proReader by Plastic Logic and the 11.5-inch Skiff Reader; both touchscreen devices are 3G enabled. Plastic Logic is marketing the Que proReader as a replacement for bundles of business papers, with support for truVue PDF files, e-mail, MS Office documents, and Outlook calendars. The Skiff Reader is targeting consumers with published content (books and publications) and multimedia.

Here's a summary of a few other e-readers shown at the 2010 Las Vegas CES:

• Iriver Story: measures 0.36 inch thick and incorporates a 6-inch E-Ink display, an integrated MP3 player, 2GB of internal memory, an SD expansion slot, USB 2.0 connectivity, and WiFi.

• Jinke SiPix e-readers: the A6 and A9 both use SiPix panels. The 6-inch and 9-inch devices have 16 levels of grayscale, WiFi 802.11a/b/g, and optional 3G, and support the FB2, EPUB, and PDF formats, most image formats, and MP3. The 6-inch (600 x 800) A6 has 2GB of storage, an SD slot, and an accelerometer; the 9-inch (1024 x 768) A9 has up to 4GB of storage. The A6 retails for $275 and the A9 for $330; both should be available in March.

• Bookeen Orizon: a touchscreen e-reader with a 6-inch display, built-in WiFi, Bluetooth, ePub support, and an accelerometer for portrait or landscape reading; will retail for about $250. No release date yet.

• Samsung E6 and E10: 6-inch and 10-inch touchscreen models that will use Google as the content provider. Both models have a QWERTY keyboard and wireless but no 3G, and feature on-screen handwriting capabilities, Bluetooth 2.0, and 802.11b/g WiFi. The 6-inch model will retail for $399, while the 10-inch will sell for $699. Both will be available in early 2010.

• Blio: e-reader software that will support PC and Mac. Blio preserves the traditional book or magazine format, keeping its layout, fonts, and images, while also letting you experience digital interactivity.

• Interead COOL-ER: 3G enabled (AT&T) and WiFi capable. Bandwidth deals with AT&T will support the NewspaperDirect service, with access to over 1,300 newspapers and magazines. Scheduled for release in mid-2010; no retail price is available yet.

To sum up, in 2010 the e-reader market is going to extend beyond basic e-books to include newspapers and magazines augmented with audio and full-color animation, video, and imagery. This will force manufacturers like Amazon (Kindle) and Sony (Reader) to go beyond the monochrome E-Ink devices they have today and produce e-readers that can compete with this new technology.

More information: For additional information on the 2010 CES e-reader review, visit: … r-story-of-ces-2010/

Citation: E-reader Roundup At The 2010 CES (2010, January 12) retrieved 18 August 2019

KUKA makes a robot that knows what it is picking up (w/ video)

Making a robot that can pick things up is not really a challenge anymore; provided you calibrate your force sensors correctly, the task is fairly simple. Making a robot that knows what it is picking up is another thing altogether.

Citation: KUKA makes a robot that knows what it is picking up (w/ video) (2011, June 20) retrieved 18 August 2019

Apparently, Germany-based KUKA Robotics is working on a bot that can do just that. Far from being humanoid, the recently updated LWR +4 resembles a tall orange worm. The LWR +4 is a robot arm with a set of visual sensors that allow the machine to analyze the item in front of it and its contents. The system makes use of a high-performance camera along with vector fields and inverse kinematics to ensure that the correct item is selected every time. The robot has a total of 7 axes, weighs 16 kg, and can carry a payload of up to 7 kg. It is also designed for assembly tasks that require a high degree of precision; its human-arm-like movement and user interface allow it to operate with controlled motions.

The LWR +4 is, of course, not the only machine that KUKA Robotics has created. With the help of their partner companies, which include Jantz Canada, CertoTech and Programmable Control Systems, KUKA will be showing off the bot at the PACKEX event. The company will also be holding a seminar at the event on June 22 from 3:10 to 3:30 pm.
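The inverse kinematics mentioned above is a standard robotics technique: it converts a desired gripper position into the joint angles that reach it. As a rough illustration only (a planar two-link toy arm with made-up link lengths, not KUKA's 7-axis controller), the textbook closed-form solution looks like this:

```python
import math

def ik_2link(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics for a planar two-link arm.
    Returns joint angles (q1, q2) in radians placing the end effector
    at (x, y), or None if the target is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the arm's workspace
    q2 = math.atan2(math.sqrt(1 - c2 * c2), c2)  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=0.4, l2=0.3):
    """Forward kinematics: joint angles -> end-effector position."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
```

A 7-axis arm like the LWR +4 is kinematically redundant (more joints than the task strictly needs), so real controllers solve this numerically and use the extra freedom to avoid obstacles and joint limits.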

Research team devises a means for measuring quantum tunneling time

More information: Resolving the time when an electron exits a tunnelling barrier, Nature 485, 343–346 (17 May 2012) DOI: 10.1038/nature11025

Journal information: Nature

Citation: Research team devises a means for measuring quantum tunneling time (2012, May 18) retrieved 18 August 2019

Abstract: The tunnelling of a particle through a barrier is one of the most fundamental and ubiquitous quantum processes. When induced by an intense laser field, electron tunnelling from atoms and molecules initiates a broad range of phenomena such as the generation of attosecond pulses, laser-induced electron diffraction and holography. These processes evolve on the attosecond timescale (1 attosecond ≡ 1 as = 10⁻¹⁸ seconds) and are well suited to the investigation of a general issue much debated since the early days of quantum mechanics—the link between the tunnelling of an electron through a barrier and its dynamics outside the barrier. Previous experiments have measured tunnelling rates with attosecond time resolution and tunnelling delay times. Here we study laser-induced tunnelling by using a weak probe field to steer the tunnelled electron in the lateral direction and then monitor the effect on the attosecond light bursts emitted when the liberated electron re-encounters the parent ion. We show that this approach allows us to measure the time at which the electron exits from the tunnelling barrier. We demonstrate the high sensitivity of the measurement by detecting subtle delays in ionization times from two orbitals of a carbon dioxide molecule. Measurement of the tunnelling process is essential for all attosecond experiments where strong-field ionization initiates ultrafast dynamics. Our approach provides a general tool for time-resolving multi-electron rearrangements in atoms and molecules—one of the key challenges in ultrafast science.

[Figure: Schematic description of the two-colour gates. Image (c) Nature 485, 343-346 (17 May 2012), DOI: 10.1038/nature11025]

In a bit of inspired research, a diverse team of researchers has devised a means for measuring the time it takes for an electron to tunnel through a barrier. The team, led by Dror Shafir at Israel's Weizmann Institute of Science, describe in their paper published in the journal Nature how they used one laser to lower a barrier, allowing an electron to escape its helium atom via tunneling, and another to prod it back again; in the process, they were able to measure the time the sequence took.

Quantum tunneling is where particles are able to move through a barrier even though they lack sufficient energy to do so. It is also a process that happens at such speed that it has been almost impossible to measure. Making things even more difficult is the fact that the motion of a tunneling particle is described differently depending on whether it is inside or outside of the barrier. Outside, the speed at which a particle moves is described by normal Newtonian physics. Inside the barrier, however, it is described by a complex number, which combines a real and an imaginary part. Making things even murkier is the fact that particles such as electrons don't exist in a single state, but rather as a quantum wave, described by a wavefunction.

To measure the tunneling time of an electron escaping the barrier imposed by the interaction between it and the nucleus of its helium atom, the research team focused a strong laser field on the electrons of a single helium atom, lowering the barrier holding them in place just enough to allow them to escape by tunneling. The team then used a second, less powerful laser field to push the electrons back to the ion that had been created, causing them to recombine and return the barrier to its normal state. When each did so, a single photon with higher energy than the initial laser field was released and could be detected by the team. As a result, they were able to measure the time the whole process took (just attoseconds) and found that it matched quantum theory, proving that they had succeeded in their efforts.

The team also conducted another experiment in which they freed electrons from carbon dioxide molecules and compared the ionization times between electrons taken from the highest energy level and those from two levels deeper, which takes more energy. They found that the difference between the two (the time to tunnel) was close to 40 attoseconds.

Taken together, the results obtained by the team demonstrate that tunneling can not only be timed, but that it can be timed in multiple ways.
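The "complex number" description of motion under the barrier can be made concrete with the standard one-dimensional rectangular-barrier model (a textbook illustration, not the actual atomic potential in the experiment). For an electron of energy E and a barrier of height V₀ > E:

```latex
% Outside the barrier: real wavevector, oscillating travelling wave
\psi_{\mathrm{out}}(x) \propto e^{\,ikx},
  \qquad k = \frac{\sqrt{2mE}}{\hbar}
% Inside the barrier: the wavevector becomes imaginary, k \to i\kappa,
% and the wave decays exponentially instead of oscillating
\psi_{\mathrm{in}}(x) \propto e^{-\kappa x},
  \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

The exponential decay is why tunneling probability falls off so quickly with barrier width, and why assigning a conventional "speed" to the particle while it is inside the barrier is so problematic.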

Robot hand wins at rock, paper, scissors every time (w/ Video)

What do you call a robot hand that wins at rock, paper, scissors every time? Some would say a cheater, but others more in the know would call it the Janken robot, built by Japanese researchers from the University of Tokyo. They've built a robot hand that, combined with a camera and tracking software, is able to beat people at their own game every time it plays. But only because it cheats.

Citation: Robot hand wins at rock, paper, scissors every time (w/ Video) (2012, June 28) retrieved 18 August 2019

When two people resort to using rock, paper, scissors to resolve a conundrum, they first ball their hands into a fist, then pump two times before throwing down their choice: rock (a fist), paper (open flat hand) or scissors (two fingers simulating a pair of scissors). Because no one hand formation can always beat the other two, the game is considered one of chance, though many insist there is a definite psychological twist as both participants attempt to guess which formation the other is going to use, and then make their decisions based on that.

With robots, though, there's no trying to psych someone out; instead it's, as always, about brute force, or in this case, speed. The Janken (the Japanese name for rock, paper, scissors) robot has a camera attached to it that feeds it information about what is going on with the hand of the opponent. Software running on the attached computer discerns which part of the picture is a human hand and then orders the tracking part of the system to follow its movements. Thus, the robot hand closely watches the human hand as it does its fist pumping, and then as it goes for the throwdown. As soon as it recognizes which gesture the hand is forming, its software calculates a winning gesture and orders the hand to throw it down, all so quickly that to us mere humans it appears as if the robot is able to guess which gesture the human is going to use, every single time.

When two people play, if one holds back on their throwdown to figure out what gesture the other is going to play before throwing down their own, anyone watching can see what's going on, and that person is labeled a cheat. When a robot does it, though, is it really cheating? Cheating is a human construct, after all, and implies a degree of deception. The robot hand isn't trying to deceive anyone; it's just doing what it has been programmed to do by human beings, which suggests that it's still people who are doing the cheating, albeit in a much more advanced way.

Thus a robotics experiment meant to advance the science by mastering a simple game played around the world has evolved into a philosophical debate regarding not just the nature of man, but how robots might fit into a future where both will likely be expected to coexist in a peaceful and productive way.
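The "calculates a winning gesture" step is the trivial part of the pipeline; nearly all of the engineering sits in the millisecond-scale vision and actuation. A minimal sketch of the decision logic (hypothetical code, with the camera's gesture recognition stubbed out as a string input):

```python
# Each gesture is beaten by exactly one other gesture, so the
# "always win" strategy reduces to a dictionary lookup once the
# opponent's forming gesture has been recognized by the camera.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def winning_gesture(opponent: str) -> str:
    """Return the gesture that beats the one just recognized."""
    return BEATS[opponent]
```

Calling `winning_gesture("rock")` returns `"paper"`, and so on; because each gesture is beaten by exactly one other, the lookup never loses.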

British scientists offer explanations on global warming pause

A team of climate experts from Britain's national weather service (the Met Office) has given a series of presentations at the Science Media Centre in London aimed at explaining why global warming has flattened over the past decade. Journalists were invited to listen as climatologists explained theories that have been developed to describe the current "pause" in global temperature increases the planet has been experiencing.

Scientists around the world have noted that despite increasing amounts of carbon being pumped into the atmosphere, average global temperatures have leveled off since the late 1990s. The main theory to explain why, members of the team said, centers on the world's oceans. Researchers studying the temperature of the oceans have found that surface temperatures increased as expected; what's new is an increase in water temperature at much greater depths. The ocean is acting as a giant heat sink, they say, absorbing much of the heat that would otherwise be found in the atmosphere. They back this up by noting that satellites measuring the heat that arrives at and leaves our planet indicate that the heat retained by the planet continues to rise, even as atmospheric temperatures have leveled off. That heat, the scientists said, has to be going somewhere, and since it is not likely being absorbed by dry land, that leaves the sea. They acknowledged that no one really knows what impact rising deep-sea temperatures might have on the planet.

Another possible explanation, the team said, is that the sun has temporarily been putting out less heat than normal; not necessarily enough to explain a leveling off of global warming, but enough to cause a slight perturbation. They noted that volcanic eruptions spewing particulates into the atmosphere (reflecting heat back into space) have also worked to stabilize rising temperatures.

The team also pointed out that the pause in global warming is almost certainly temporary and that the consensus among the world's climatologists is that temperatures will once again begin to rise, likely sooner rather than later. They insisted that earlier projections of an average global temperature rise of 2°C by the end of this century are still correct, and that skeptics should not take the leveling off of temperatures as a sign that climatologists have been wrong. Periodic flattening of rising temperatures, they noted, has always been in the projection models.

Citation: British scientists offer explanations on global warming pause (2013, July 23) retrieved 18 August 2019

Dynamics of genetic admixture in Brazilian populations

Human genomic diversity studies provide a window into population movements across regions and societies throughout history. South America has generally been underrepresented in such studies, but recognizing that Brazil provides a classical model of population admixture, an international group of researchers recently conducted a population-based, genome-wide analysis of three Brazilian populations.

More information: "Origin and dynamics of admixture in Brazilians and its effect on the pattern of deleterious mutations." PNAS 2015; published ahead of print June 29, 2015, DOI: 10.1073/pnas.1504447112

Journal information: Proceedings of the National Academy of Sciences

Citation: Dynamics of genetic admixture in Brazilian populations (2015, July 8) retrieved 18 August 2019

[Image credit: Marcelo Calvet/Wikipedia]

Their study, the EPIGEN Brazil Initiative, represents the most comprehensive genomic analysis of any South American population to date. They have published their results in the Proceedings of the National Academy of Sciences.

The population of Brazil consists of the post-Columbian admixture between Amerindians, European colonizers and immigrants, and African slaves. The authors note that Brazil was the destination of 40 percent of the African diaspora that characterized the period of the slave trade; Brazil received seven times more slaves than the United States during that period.

From 6,487 admixed Brazilian individuals, the researchers genotyped nearly 2.2 million single nucleotide polymorphisms (SNPs, colloquially called "snips"), which are DNA sequence variations that occur commonly within populations. They studied three population-based cohorts from different regions with distinct socioeconomic backgrounds.

This population-based approach allowed the researchers to identify and quantify ancestral SNP components of three representative Brazilian populations for the first time; they developed an approximate Bayesian analysis to infer properties of population admixture; they identified how genetic structure was influenced by ancestry-related social history; and they were able to study the interactions of admixture, kinship, and inbreeding on patterns of deleterious genetic mutations.

The populations in question were from Salvador, a coastal city of 2.7 million inhabitants; Bambuí, a city of around 15,000; and Pelotas, a city of 214,000. Across these three regions, the researchers traced a historical pattern of sex-biased preferential mating between men of predominantly European ancestry and women of predominantly African or Amerindian ancestry.

Families from Salvador and Pelotas, the cities with the largest populations, had the lowest rates of consanguinity. By contrast, Bambuí, the smallest population in the study, had the strongest family structure and the most inbreeding, which was correlated with European ancestry. Genomic ancestry in Brazil correlates with a set of phenotypes such as self-reported ethnicity and skin color, and with social aspects such as socioeconomic status. The authors write, "… after five centuries of admixture, Brazilians still preferentially mate with individuals with similar ancestry (and its correlated morphological phenotypes and socioeconomic characteristics), a trend also observed in Mexicans and Puerto Ricans." This was particularly true in Pelotas and Bambuí, which have higher proportions of individuals with markedly predominant ancestries. In Salvador, however, the population is far more admixed, likely due to a combination of factors including a longer history of admixture and the relatively homogeneous socioeconomic status of the inhabitants, according to the authors.

The authors also investigated how European ancestry shapes the number of deleterious genetic variants in admixed individuals, in both heterozygous and homozygous form. They report that in Latin American populations, the history of continental admixture is the main determinant of the presence of deleterious variants, but in a much more complex way than they expected, and likely unrelated to local demographic history.

They suggest that future studies of Northern or Central-West Brazilian populations might reveal larger dynamics of Amerindian ancestry. They also speculate that studies of large urban centers that historically serve as destinations for immigration might reveal the influence of other global ancestry components.

Accelerating light beams in curved space

More information: Anatoly Patsyk et al. "Observation of Accelerating Wave Packets in Curved Space." Physical Review X. DOI: 10.1103/PhysRevX.8.011001

Journal information: Physical Review X

Citation: Accelerating light beams in curved space (2018, January 12) retrieved 18 August 2019

[Figure: (a) Experimental setup, (b) propagation of the green beam inside the red shell of an incandescent light bulb, and (c) photograph of the lobes of the accelerating beam. Credit: Patsyk et al. ©2018 American Physical Society]

[Figure: The accelerating light beam propagates on a nongeodesic trajectory, rather than the geodesic trajectory taken by a non-accelerating beam. Credit: Patsyk et al. ©2018 American Physical Society]

By shining a laser along the inside shell of an incandescent light bulb, physicists have performed the first experimental demonstration of an accelerating light beam in curved space. Rather than moving along a geodesic trajectory (the shortest path on a curved surface), the accelerating beam bends away from the geodesic trajectory as a result of its acceleration.

Previously, accelerating light beams have been demonstrated on flat surfaces, on which their acceleration causes them to follow curved trajectories rather than straight lines. Extending accelerating beams to curved surfaces opens the doors to additional possibilities, such as emulating general relativity phenomena (for example, gravitational lensing) with optical devices in the lab.

The physicists, Anatoly Patsyk, Miguel A. Bandres, and Mordechai Segev at the Technion – Israel Institute of Technology, along with Rivka Bekenstein at Harvard University and the Harvard-Smithsonian Center for Astrophysics, have published a paper on the accelerating light beams in curved space in a recent issue of Physical Review X.

"This work opens the doors to a new avenue of study in the field of accelerating beams," Patsyk said. "Thus far, accelerating beams were studied only in a medium with a flat geometry, such as flat free space or slab waveguides. In the current work, optical beams follow curved trajectories in a curved medium."

In their experiments, the researchers first transformed an ordinary laser beam into an accelerating one by reflecting it off a spatial light modulator. As the scientists explain, this imprints a specific wavefront upon the beam. The resulting beam is both accelerating and shape-preserving, meaning it doesn't spread out as it propagates in a curved medium, as ordinary light beams would. The accelerating beam is then launched into the shell of an incandescent light bulb, which was painted to scatter light and make the propagation of the beam visible.

When moving along the inside of the light bulb, the accelerating beam follows a trajectory that deviates from the geodesic line. For comparison, the researchers also launched a non-accelerating beam inside the light bulb shell, and observed that that beam follows the geodesic line. By measuring the difference between these two trajectories, the researchers could determine the acceleration of the accelerating beam.

Whereas the trajectory of an accelerating beam on a flat surface is determined entirely by the beam width, the new study shows that the trajectory of an accelerating beam on a spherical surface is determined by both the beam width and the curvature of the surface. As a result, an accelerating beam may change its trajectory, as well as periodically focus and defocus, due to the curvature.

The ability to accelerate light beams along curved surfaces has a variety of potential applications, one of which is emulating general relativity phenomena. "Einstein's equations of general relativity determine, among other issues, the evolution of electromagnetic waves in curved space," Patsyk said. "It turns out that the evolution of electromagnetic waves in curved space according to Einstein's equations is equivalent to the propagation of electromagnetic waves in a material medium described by the electric and magnetic susceptibilities that are allowed to vary in space. This is the foundation of emulating numerous phenomena known from general relativity by the electromagnetic waves propagating in a material medium, giving rise to the emulating effects such as gravitational lensing and Einstein's rings, gravitational blue shift or red shift, which we have studied in the past, and much more."

The results could also offer a new technique for controlling nanoparticles in blood vessels, microchannels, and other curved settings. Accelerating plasmonic beams (which are made of plasma oscillations instead of light) could potentially be used to transfer power from one area to another on a curved surface. The researchers plan to further explore these possibilities and others in the future.

"We are now investigating the propagation of light within the thinnest curved membranes possible—soap bubbles of molecular thickness," Patsyk said. "We are also studying linear and nonlinear wave phenomena, where the laser beam affects the thickness of the membrane and in return the membrane affects the light beam propagating within it."
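To make the flat-surface half of that comparison concrete, here is a back-of-the-envelope sketch (hypothetical parameters, not those of this experiment) using the standard paraxial result that a finite-energy Airy beam's main lobe drifts parabolically, with the deflection set entirely by the wavelength and the beam-width parameter x0. The curved-space case studied here adds the surface curvature as a second control knob.

```python
import math

# Textbook flat-space Airy beam: a beam with initial transverse profile
# Ai(x / x0) has a main lobe that drifts parabolically with distance z,
#     x(z) = z**2 / (4 * k**2 * x0**3),   k = 2*pi / wavelength.
# Illustrative numbers only (green laser, 50-micron width parameter);
# no curvature term is included.

def airy_deflection(z, wavelength=532e-9, x0=50e-6):
    """Transverse drift (metres) of the Airy main lobe after distance z."""
    k = 2 * math.pi / wavelength
    return z ** 2 / (4 * k ** 2 * x0 ** 3)
```

The quadratic dependence means doubling the propagation distance quadruples the sideways drift, which is the sense in which the beam "accelerates" even though nothing pushes on it.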

Astronomers conduct detailed chemical analysis of eleven globular clusters

first_img More information: Detailed abundance analysis of globular clusters in the Local Group: NGC 147, NGC 6822, and Messier 33, arXiv:1801.03140 [astro-ph.GA] present new abundance measurements for eleven GCs in the Local Group galaxies NGC 147, NGC 6822, and Messier 33. These are combined with previously published observations of four GCs in the Fornax and WLM galaxies. The abundances were determined from analysis of integrated-light spectra, obtained with HIRES on the Keck I telescope and with UVES on the VLT. We find that the clusters with [Fe/H]<-1.5 are all alpha-enhanced at about the same level as Milky Way GCs. Their Na abundances are also generally enhanced relative to Milky Way halo stars, suggesting that these extragalactic GCs resemble their Milky Way counterparts in containing significant fractions of Na-rich stars. For [Fe/H]>-1.5, the GCs in M33 are also alpha-enhanced, while the GCs that belong to dwarfs (NGC 6822 SC7 and Fornax 4) have closer to Solar-scaled alpha-element abundances, thus mimicking the abundance trends observed in field stars in nearby dwarf galaxies. The abundance patterns in SC7 are remarkably similar to those in the Galactic GC Ruprecht 106, including significantly sub-solar [Na/Fe] and [Ni/Fe] ratios. In NGC 147, the GCs with [Fe/H]<-2.0 account for about 6% of the total luminosity of stars in the same metallicity range, a lower fraction than those previously found in the Fornax and WLM galaxies, but substantially higher than in the Milky Way halo. Globular clusters are spheroidal collections of tightly bound stars orbiting galaxies. For astronomers, they are natural laboratories that enable studies on stellar and chemical evolution. Therefore, detailed abundance analyses of globular clusters could help us answer many fundamental questions in astrophysics.With that aim in mind, a team of astronomers led by Soeren S. 
Larsen of Radboud University in Nijmegen, the Netherlands, has analyzed spectra of globular clusters in the galaxies NGC 147, NGC 6822, and Messier 33, all located in the Local Group. The spectroscopic data were obtained with the HIRES (High Resolution Echelle Spectrometer) spectrograph on the Keck I telescope in Hawaii and with UVES (UV-Visual Echelle Spectrograph) on the Very Large Telescope (VLT) in Chile. The analysis allowed the researchers to determine detailed chemical abundances for eleven globular clusters.

"We have presented new integrated-light measurements of chemical abundances for 11 globular clusters in NGC 147, NGC 6822, and Messier 33," the astronomers wrote in the paper.

In general, the researchers found that globular clusters in dwarf galaxies like NGC 147 tend to be relatively metal-poor, compared both with their counterparts in the halo of the Milky Way and with the field stars of their host galaxies. However, they emphasized that no globular clusters with a metallicity [Fe/H] below −2.5 have yet been found, either in the Milky Way or among the clusters observed so far.

The study reveals that the stellar abundance ratios of alpha elements to iron behave differently as a function of metallicity in the dwarf galaxies and in Messier 33. The analysis by Larsen's team indicates that the metal-poor clusters described in the paper are alpha-enhanced at about the same level as globular clusters in the Milky Way. However, while the more metal-rich globular clusters in the dwarf galaxies have ratios of alpha elements to iron close to the solar value, those in Messier 33 remain alpha-enhanced. Moreover, the astronomers found that the alpha elements in the Messier 33 clusters follow patterns similar to those seen in the globular clusters of our own galaxy.
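The bracket notation used throughout follows the standard astronomical convention (not specific to this paper): a logarithmic abundance ratio measured relative to the Sun, so that [Fe/H] = 0 is solar metallicity and [Fe/H] = −2.5 corresponds to an iron fraction roughly 300 times below solar. As a sketch:

```latex
[\mathrm{Fe}/\mathrm{H}]
  = \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star}
  - \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot},
\qquad
[\alpha/\mathrm{Fe}]
  = \log_{10}\!\left(\frac{N_{\alpha}}{N_{\mathrm{Fe}}}\right)_{\!\star}
  - \log_{10}\!\left(\frac{N_{\alpha}}{N_{\mathrm{Fe}}}\right)_{\!\odot}
```

Here the N are number densities of each element in the star (⋆) and the Sun (⊙); "alpha-enhanced" means [α/Fe] > 0, i.e. alpha elements such as O, Mg, Si and Ca are overabundant relative to iron compared with the solar mixture.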
This finding led the authors to suggest that the Messier 33 halo underwent relatively rapid chemical enrichment, dominated by Type II supernova nucleosynthesis.

In concluding remarks, the researchers noted that at low metallicities, the abundance patterns suggest that globular clusters in the Milky Way, in dwarf galaxies, and in Messier 33 experienced similar enrichment histories or processes. Furthermore, they emphasized that at higher metallicities, the lower levels of alpha-enhancement in the globular clusters of dwarf galaxies resemble the abundance patterns observed in field stars in nearby dwarfs.

[Figure caption: Metallicity distributions of field stars (Ho et al. 2014) and GCs in NGC 147. For the GCs, the bars have been scaled according to the V-band luminosity of each cluster. Solid bars: this work. Dashed bars: Veljanoski et al. (2013). Credit: Larsen et al., 2018.]

Astronomers have performed abundance measurements for 11 globular clusters in the galaxies NGC 147, NGC 6822, and Messier 33. The new study, presented January 9 in a paper published on arXiv, could improve our knowledge of the chemical composition of stellar populations in the universe.

Citation: Astronomers conduct detailed chemical analysis of eleven globular clusters (2018, January 16), retrieved 18 August 2019.

Biofunctionalized ceramics for cranial bone defect repair – in vivo study

Advances in materials science and production technology have enabled bone tissue engineering (BTE) strategies that generate complex scaffolds with controlled architecture for bone repair. These novel biomaterials can be further functionalized with bioactive molecules that enhance osteoinductivity (the capacity to induce osteogenesis and initiate bone healing). In a recent study published in the IOP Science journal Multifunctional Materials, Arun Kumar Teotia and co-workers at departments of bioengineering, orthopedics, chemical engineering and biomedical engineering in India, Finland and Sweden developed a novel, multifunctional, bilayered composite scaffold (BCS). The material combined a ceramic nano-cement (NC) with a macroporous composite scaffold (CG) to mimic bone architecture during bone repair. To functionalize the scaffolds, the materials scientists added recombinant human bone morphogenetic protein-2 (rhBMP-2, abbreviated BMP) and zoledronic acid (ZA). The scientists proposed that the composite scaffolds would support the proliferation of osteoblast progenitor cells alongside the controlled release of the loaded bioactive molecules to induce bone regeneration.

The same research team had previously developed a similar multifunctional material and tested its initial impact in an in vivo pilot study. In the present study, Teotia et al. observed a higher amount of mineralized tissue (MT) with functionalized scaffolds within 12 weeks of in vivo implantation in a larger group of rats with 8.5 mm critical cranial defects. The bilayered composite scaffold functionalized with zoledronic acid (BCS+ZA) showed the highest MT deposition (13.9 mm3), followed by the macroporous composite scaffold functionalized with BMP and ZA (CG+BMP+ZA) at 9.2 mm3 and BCS+ZA+BMP at 7.6 mm3.
The MT values recorded during bone regeneration were significantly higher than the osteogenesis observed on the non-functionalized CG or BCS scaffolds alone (without bioactive molecules). The results support the BTE strategy developed in the study: an osteo-promotive multifunctional scaffold that can be implanted in vivo to repair critical defects.

Bone is a highly dynamic tissue with substantial potential for regeneration; a unique feature is its capacity to heal without scar formation. Natural bone formation occurs either via endochondral ossification in tubular bones (e.g. phalanges, femur), in which cartilage is deposited first and subsequently ossifies, or via intramembranous (direct) ossification in flat bones (skull, pelvis), without cartilage formation. Regeneration is a slow process in flat bones due to limited mesenchymal stem cells (MSCs), requiring major cell recruitment from the periosteum or dura.

Citation: Biofunctionalized ceramics for cranial bone defect repair – in vivo study (2019, February 28), retrieved 18 August 2019.

[Figure caption: Illustration of the multifunctional materials experimentally developed in the study for in vivo applications of cell proliferation and cranioplasty. Defect healing in a rodent model is observed after 12 weeks of scaffold implantation at the site of defect with cell proliferation, radiography, micro-CT and histology analyses. Image credit: ACS Applied Materials & Interfaces. Credit: Multifunctional Materials.]

More information: Arun Kumar Teotia et al.
Composite bilayered scaffolds with bio-functionalized ceramics for cranial bone defects: An in vivo evaluation, Multifunctional Materials (2019). DOI: 10.1088/2399-7532/aafc5b

Arun Kumar Teotia et al. Nano-Hydroxyapatite Bone Substitute Functionalized with Bone Active Molecules for Enhanced Cranial Bone Regeneration, ACS Applied Materials & Interfaces (2017). DOI: 10.1021/acsami.6b14782

Michael D. Hoffman et al. The effect of mesenchymal stem cells delivered via hydrogel-based tissue engineered periosteum on bone allograft healing, Biomaterials (2013). DOI: 10.1016/j.biomaterials.2013.08.005

Peter Frederik Horstmann et al. Composite Biomaterial as a Carrier for Bone-Active Substances for Metaphyseal Tibial Bone Defect Reconstruction in Rats, Tissue Engineering Part A (2017). DOI: 10.1089/ten.TEA.2017.0040

As a result, healing critical size defects in flat bones such as the cranium is a challenge requiring optimized BTE strategies. Autograft bone flaps were initially preferred for cranioplasty to minimize immunological reactions, infections and foreign-body recognition. Thereafter, scientists developed vascularized calvarium bone grafts as a preferred choice for cranial reconstruction. However, these grafting strategies introduced complications during material resorption post-implantation and repair, alongside other clinical complications at the contact site between the implant and the original bone. Regeneration and cell infiltration into a calvaria flap largely depend on progenitor cells that migrate from the underlying dura or the overlying pericranial layers and differentiate into active osteogenic cells. If cell migration from the two membranes (dura and pericranium) is occluded, bone formation is significantly lower.
Earlier studies had established that the two membranes play specific roles during regeneration, although with age the role of the periosteum in cranium regeneration becomes less significant. In the present study, Teotia et al. hypothesized that an osteoconductive surface could maintain cross-talk between the dura and pericranial layers for early vascularization and clinical success. To accomplish this, they generated a bilayered scaffold architecture that integrated a resorbable biphasic nano-hydroxyapatite-calcium sulphate ceramic nano-cement (NC) as the upper layer and a silk-bioglass-hydroxyapatite composite porous cryogel (CG) as the underlying layer.

Teotia et al. used the bilayered design to combine the mechanical strength of NC as a protective upper layer with the porous CG layer as a surface for cell attachment, infiltration, proliferation and vascularization. The scientists expected the designed surfaces to maintain communication between the underlying dura and the overlying periosteal membranes. They functionalized the novel materials and implanted them in vivo in Wistar rats with critical cranial defects to evaluate the effect of the bilayered porous architecture on osteoconduction and bone formation in preclinical, translational studies.

During materials fabrication, the scientists cast the NC into a concave-convex architecture matching the shape of the cranium and allowed it to set, engineering multifunctional bilayered scaffolds for cranioplasty. They formed circular BCS discs composed of an upper NC layer and a lower CG layer and conducted surgical procedures on the animal models. During surgery, Teotia et al. implanted the scaffold discs at the defect site; 12 weeks after implantation, they sacrificed the animals and performed ex vivo micro-CT and radiological analysis on the excised and harvested calvaria.
The scientists completed radiological analyses of bone formation at the defect site to observe ossified tissue formation, using the nanoScan in vivo scanner for radiographical projections of the defect. They used micro-CT analysis to detect highly mineralized tissue (MT) formation and to investigate defect filling within the 8.5 mm surgically induced circular defect (the region of interest). By 12 weeks, mineralization had not achieved complete closure in the animal model. Image quantification showed the highest amount of mineralized tissue formation in the BCS+ZA group, followed by the CG+BMP+ZA and BCS+ZA+BMP groups.

Post-harvest, the scientists fixed the cranium samples for histology and conducted hematoxylin and eosin (H&E) and Masson's trichrome staining of the rat calvaria. They showed that both the porous composite scaffold (CG) and the bilayered scaffold (NC+CG, i.e. BCS) integrated well with the existing bone at the defect site. The scaffolds provided porous surfaces for thorough cell infiltration. Teotia et al. also showed via histology that the functionalized scaffolds had consistently higher MT formation, owing to the osteoconductive and osteoinductive factors in the bioactive-molecule composite, compared with the non-functionalized groups. The histology results were consistent with the micro-CT results.

In this way, Teotia et al. showed that multifunctional composite scaffolds could replace auto- or allografts in large bone defects in the cranium. The multifunctional materials induced early vascularization and enhanced mineralization in vivo. As expected, the composite scaffolds allowed porous osteoconductive communication between early cell infiltration from the periosteum and the underlying dura layers during rapid bone formation. The multifunctional materials hold promise to enhance bone mineralization and early defect healing post-implantation.
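The mineralized-tissue volumes reported above come from quantifying thresholded micro-CT images. A minimal sketch of that kind of quantification, not the authors' actual pipeline, is to count voxels above a mineralization grey-value threshold and multiply by the voxel volume; the threshold, voxel size, and synthetic image stack below are all assumed values for illustration.

```python
import numpy as np

# Synthetic stand-in for an 8-bit micro-CT stack (real data would be loaded
# from the scanner); values are arbitrary for demonstration.
rng = np.random.default_rng(0)
ct_volume = rng.uniform(0, 255, size=(100, 100, 100))

mineral_threshold = 200   # assumed grey-value cutoff for "mineralized" voxels
voxel_size_mm = 0.02      # assumed isotropic voxel edge length in mm

# Volume = (number of voxels above threshold) x (volume of one voxel)
mt_voxels = np.count_nonzero(ct_volume >= mineral_threshold)
mt_volume_mm3 = mt_voxels * voxel_size_mm ** 3

print(f"Estimated MT volume: {mt_volume_mm3:.2f} mm^3")
```

In practice, dedicated micro-CT software also restricts the count to the region of interest (here, the 8.5 mm defect) and calibrates the threshold against phantoms of known mineral density, but the core volume estimate follows this voxel-counting logic.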
Teotia et al. propose to conduct additional studies in large preclinical animal models to optimize and translate the new biomaterial for clinical applications.