Tag Archives: discovering

“A Great Surprise”: Scientists Stunned After Discovering How Cats Purr – Bored Panda

  1. “A Great Surprise”: Scientists Stunned After Discovering How Cats Purr Bored Panda
  2. We now know how cats purr—why they purr is still up for debate Ars Technica
  3. Domestic Cats Produce Purring-Like Sounds Differently than Previously Thought Sci.News
  4. Kim Catdashian! Women who speak with husky ‘vocal fry’ use the same technique as cats when they purr, study fi Daily Mail
  5. Kim Kardashian and Paris Hilton use mysterious ‘cat voice’ fans want to copy – but scientists say it has a… The US Sun

Read original article here

Juri players are going crazy discovering instant overhead combos, frenzied Feng Shui Engine feet and so much more in Street Fighter 6

Street Fighter 6’s beta has been going strong for basically 2 whole days at this point, and people are digging as quickly as they can to find the most dirt possible.

Juri is probably the leading standout in that cause with players discovering some scary tech, setups, combos and more using all of the tools her fancy footwork allows.

Magistrix was one of the first to really catch our eye with Juri’s instant overhead confirm, which lets her hit jump medium punch into Overdrive divekick and EX Fuha charge to finish things off with a Critical Art that takes off 4,120 health from Luke.

This setup by itself doesn’t work against crouching opponents, but Magistrix apparently went back and found a fuzzy setup to make it even scarier to try pressing buttons.

Using a fairly similar idea, UFOrange showed off Juri’s unexpected Feng Shui Engine confirm from the OD divekick and Fuha charge while the opponent is still airborne, which opens up the door to even more combo possibilities.

Those looking to deal some extra damage with the villainess can probably steal some ideas from Tyrant since he landed a 40% combo in a real match. Be warned there is a little NSFW language in the clip.

You may think it too early to talk option selects, but EmmiMcD showed off an easy one with Juri’s spinning uppercut not coming out if the opponent blocks medium punch into medium kick.

That’s because only the first hit of the kick is special cancellable, so it basically works the same as in Street Fighter 5, although now there’s potentially more she can do with that.

And if you hadn’t learned to be worried about getting Punish Countered by Juri yet, Incuteddy has you covered for why you should be.

Let us know in the comments if you’ve seen other cool applications with Juri and which character you’re most impressed with in SF6’s beta so far.



Read original article here

Discovering faster matrix multiplication algorithms with reinforcement learning

TensorGame

TensorGame is played as follows. The start position \({{\mathscr{S}}}_{0}\) of the game corresponds to the tensor \({\mathscr{T}}\) representing the bilinear operation of interest, expressed in some basis. In each step t of the game, the player writes down three vectors (u(t), v(t), w(t)), which specify the rank-1 tensor u(t) ⊗ v(t) ⊗ w(t), and the state of the game is updated by subtracting the newly written down factor:

$${{\mathscr{S}}}_{t}\leftarrow {{\mathscr{S}}}_{t-1}-{{\bf{u}}}^{(t)}\otimes {{\bf{v}}}^{(t)}\otimes {{\bf{w}}}^{(t)}.$$

(2)

The game ends when the state reaches the zero tensor, \({{\mathscr{S}}}_{R}={\bf{0}}\). This means that the factors written down throughout the game form a factorization of the start tensor \({{\mathscr{S}}}_{0}\), that is, \({{\mathscr{S}}}_{0}={\sum }_{t=1}^{R}{{\bf{u}}}^{(t)}\otimes {{\bf{v}}}^{(t)}\otimes {{\bf{w}}}^{(t)}\). This factorization is then scored. For example, when optimizing for asymptotic time complexity the score is −R, and when optimizing for practical runtime the algorithm corresponding to the factorization \({\{({{\bf{u}}}^{(t)},{{\bf{v}}}^{(t)},{{\bf{w}}}^{(t)})\}}_{t=1}^{R}\) is constructed (see Algorithm 1) and then benchmarked on the fly (see Supplementary Information).

In practice, we also impose a limit \({R}_{\text{limit}}\) on the maximum number of moves in the game, so that a weak player is not stuck in unnecessarily (or even infinitely) long games. When a game ends because it has run out of moves, a penalty score is given so that it is never advantageous to deliberately exhaust the move limit. For example, when optimizing for asymptotic time complexity, this penalty is derived from an upper bound on the tensor rank of the final residual tensor \({{\mathscr{S}}}_{{R}_{\text{limit}}}\). This upper bound on the tensor rank is obtained by summing the matrix ranks of the slices of the tensor.
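
As a concrete illustration of the game mechanics, the following is a minimal NumPy sketch of the state update in equation (2), the terminal test and the rank upper bound used for the move-limit penalty; the function names and structure are illustrative rather than taken from the authors' code.

```python
import numpy as np

def play_move(state, u, v, w):
    """One TensorGame step: subtract the rank-1 tensor u ⊗ v ⊗ w (equation (2))."""
    return state - np.einsum('i,j,k->ijk', u, v, w)

def is_terminal(state):
    """The game ends when the state reaches the zero tensor."""
    return not np.any(state)

def rank_upper_bound(residual):
    """Penalty at the move limit: an upper bound on the tensor rank of the
    residual tensor, obtained by summing the matrix ranks of its slices."""
    return int(sum(np.linalg.matrix_rank(residual[i]) for i in range(residual.shape[0])))
```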

TensorGame over rings

We say that the decomposition of \({{\mathscr{T}}}_{n}\) in equation (1) is in a ring \({\mathcal{E}}\) (defining the arithmetic operations) if each of the factors u(t), v(t) and w(t) has entries belonging to the set \({\mathcal{E}}\), and additions and multiplications are interpreted according to \({\mathcal{E}}\). The tensor rank depends, in general, on the ring. At each step of TensorGame, the additions and multiplications in equation (2) are interpreted in \({\mathcal{E}}\). For example, when working in \({{\mathbb{Z}}}_{2}\) (in this case, the factors u(t), v(t) and w(t) live in F = {0, 1}), a modulo 2 operation is applied after each state update (equation (2)).

We note that integer-valued decompositions u(t), v(t) and w(t) lead to decompositions in arbitrary rings \({\mathcal{E}}\). Hence, provided F only contains integers, algorithms we find in standard arithmetic apply more generally to any ring.
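
For the modular case, the only change is a reduction after each update; a sketch for the Z2 setting (factor entries in F = {0, 1}), reusing the step above:

```python
import numpy as np

def play_move_mod2(state, u, v, w):
    """TensorGame step in Z_2: apply a modulo-2 reduction after the update
    in equation (2); factor entries live in F = {0, 1}."""
    return np.mod(state - np.einsum('i,j,k->ijk', u, v, w), 2)
```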

AlphaTensor

AlphaTensor builds on AlphaZero1 and its extension Sampled AlphaZero21, combining a deep neural network with a sample-based MCTS search algorithm.

The deep neural network, fθ(s) = (π, z) parameterized by θ, takes as input the current state s of the game and outputs a probability distribution π(s) over actions and z(s) over returns (sum of future rewards) G. The parameters θ of the deep neural network are trained by reinforcement learning from self-play games and synthetic demonstrations. Self-play games are played by actors, running a sample-based MCTS search at every state st encountered in the game. The MCTS search returns an improved probability distribution over moves from which an action at is selected and applied to the environment. The sub-tree under at is reused for the subsequent search at st+1. At the end of the game, a return G is obtained and the trajectory is sent to the learner to update the neural network parameters θ. The distribution over returns z(st) is learned through distributional reinforcement learning using the quantile regression distributional loss34, and the network policy π(st) is updated using a Kullback–Leibler divergence loss, to maximize its similarity to the search policy for self-play games or to the next action for synthetic demonstrations. We use the Adam optimizer35 with decoupled weight decay36 to optimize the parameters θ of the neural network.
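
As an illustration of the value-head training target, the sketch below is a plain (non-Huber) quantile regression loss over q predicted quantiles; the paper uses the quantile regression distributional loss of ref. 34, so this should be read as a simplified stand-in rather than the exact loss.

```python
import numpy as np

def quantile_regression_loss(pred_quantiles, target_return):
    """Pinball loss for the value head: pred_quantiles has shape (q,) and
    predicts the (2j+1)/(2q) quantiles of the return; target_return is the
    observed game return G."""
    q = pred_quantiles.shape[0]
    taus = (2 * np.arange(q) + 1) / (2 * q)      # quantile midpoints
    delta = target_return - pred_quantiles       # positive when a prediction is too low
    return float(np.mean(np.abs(taus - (delta < 0)) * np.abs(delta)))
```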

Sample-based MCTS search

The sample-based MCTS search is very similar to the one described in Sampled AlphaZero. Specifically, the search consists of a series of simulated trajectories of TensorGame that are aggregated in a tree. The search tree therefore consists of nodes representing states and edges representing actions. Each state-action pair (s, a) stores a set of statistics \((N(s,a),Q(s,a),\hat{\pi }(s,a))\), where N(s, a) is the visit count, Q(s, a) is the action value and \(\hat{\pi }(s,a)\) is the empirical policy probability. Each simulation traverses the tree from the root state s0 until a leaf state sL is reached by recursively selecting in each state s an action a that has not been frequently explored, has high empirical policy probability and high value. Concretely, actions within the tree are selected by maximizing over the probabilistic upper confidence tree bound21,37

$$\mathop{{\rm{argmax}}}\limits_{a}\,Q(s,a)+c(s)\cdot \hat{\pi }(s,a)\frac{\sqrt{{\sum }_{b}N(s,b)}}{1+N(s,a)},$$

where c(s) is an exploration factor controlling the influence of the empirical policy \(\hat{\pi }(s,a)\) relative to the values Q(s, a) as nodes are visited more often. In addition, a transposition table is used to recombine different action sequences if they reach the exact same tensor. This can happen particularly often in TensorGame as actions are commutative. Finally, when a leaf state sL is reached, it is evaluated by the neural network, which returns K actions {ai} sampled from \(\pi (\cdot | {s}_{{\rm{L}}})\), alongside the empirical distribution \(\hat{\pi }(a| {s}_{{\rm{L}}})=\frac{1}{K}{\sum }_{i}{\delta }_{a,{a}_{i}}\) and a value v(sL) constructed from z(sL). Differently from AlphaZero and Sampled AlphaZero, we chose v not to be the mean of the distribution of returns z(sL) as is usual in most reinforcement learning agents, but instead to be a risk-seeking value, leveraging the facts that TensorGame is a deterministic environment and that we are primarily interested in finding the best trajectory possible. The visit counts and values on the simulated trajectory are then updated in a backward pass as in Sampled AlphaZero.
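
A minimal sketch of this selection rule, applied to the statistics of the K actions sampled at a node (array names are illustrative):

```python
import numpy as np

def select_action(Q, N, pi_hat, c):
    """Probabilistic upper confidence tree (PUCT) bound: pick the action
    maximizing Q(s,a) + c(s) * pi_hat(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))."""
    ucb = Q + c * pi_hat * np.sqrt(N.sum()) / (1.0 + N)
    return int(np.argmax(ucb))
```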

Policy improvement

After simulating N(s) trajectories from state s using MCTS, the normalized visit counts of the actions at the root of the search tree N(s, a)/N(s) form a sample-based improved policy. Differently from AlphaZero and Sampled AlphaZero, we use an adaptive temperature scheme to smooth the normalized visit counts distribution as some states can accumulate an order of magnitude more visits than others because of sub-tree reuse and transposition table. Concretely, we define the improved policy as \({\mathcal{I}}\hat{\pi }(s,a)={N}^{1/\tau (s)}(s,a)/{\sum }_{b}{N}^{1/\tau (s)}(s,b)\), where \(\tau (s)=\log N(s)/\log \bar{N}\) if \(N(s) > \bar{N}\) and 1 otherwise, with \(\bar{N}\) being a hyperparameter. For training, we use \({\mathcal{I}}\hat{\pi }\) directly as a target for the network policy π. For acting, we additionally discard all actions that have a value lower than the value of the most visited action, and sample proportionally to \({\mathcal{I}}\hat{\pi }\) among those remaining high-value actions.
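
The adaptive temperature and the acting-time filtering can be sketched as follows, with N_bar standing for the hyperparameter N̄ (a simplification under the definitions above):

```python
import numpy as np

def improved_policy(visit_counts, N_bar):
    """Smoothed, sample-based improved policy: exponent 1/tau(s) with
    tau(s) = log N(s) / log N_bar when N(s) > N_bar, and 1 otherwise."""
    N_s = visit_counts.sum()
    tau = np.log(N_s) / np.log(N_bar) if N_s > N_bar else 1.0
    weights = visit_counts.astype(float) ** (1.0 / tau)
    return weights / weights.sum()

def act(visit_counts, values, N_bar, rng=np.random.default_rng()):
    """For acting: discard actions valued below the most visited action,
    then sample proportionally to the improved policy among the rest."""
    pi = improved_policy(visit_counts, N_bar)
    keep = values >= values[np.argmax(visit_counts)]
    probs = np.where(keep, pi, 0.0)
    return int(rng.choice(len(pi), p=probs / probs.sum()))
```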

Learning one agent for multiple target tensors

We train a single agent to decompose the different tensors \({{\mathscr{T}}}_{n,m,p}\) in a given arithmetic (standard or modular). As the network works with fixed-size inputs, we pad all tensors (with zeros) to the size of the largest tensor we consider (\({{\mathscr{T}}}_{5}\), of size 25 × 25 × 25). At the beginning of each game, we sample uniformly at random a target \({{\mathscr{T}}}_{n,m,p}\), and play TensorGame. Training a single agent on different targets leads to better results thanks to the transfer between targets. All our results reported in Fig. 3 are obtained using multiple runs of this multi-target setting. We also train a single agent to decompose tensors in both arithmetics. Owing to learned transfer between the two arithmetics, this agent discovers a different distribution of algorithms (of the same ranks) in standard arithmetic than the agent trained on standard arithmetic only, thereby increasing the overall diversity of discovered algorithms.

Synthetic demonstrations

The synthetic demonstrations buffer contains tensor-factorization pairs, where the factorizations \({\{({{\bf{u}}}^{(r)},{{\bf{v}}}^{(r)},{{\bf{w}}}^{(r)})\}}_{r=1}^{R}\) are first generated at random, after which the tensor \({\mathscr{D}}={\sum }_{r=1}^{R}{{\bf{u}}}^{(r)}\otimes {{\bf{v}}}^{(r)}\otimes {{\bf{w}}}^{(r)}\) is formed. We create a dataset containing 5 million such tensor-factorization pairs. Each element in the factors is sampled independently and identically distributed (i.i.d.) from a given categorical distribution over F (all possible values that can be taken). We discarded instances whose decompositions were clearly suboptimal (contained a factor with u = 0, v = 0, or w = 0).
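
A sketch of how one such tensor-factorization pair could be generated; the entry set and the uniform sampling below are illustrative placeholders for F and its categorical distribution:

```python
import numpy as np

def synthetic_demonstration(S, R, entry_set=(-2, -1, 0, 1, 2),
                            rng=np.random.default_rng()):
    """Sample R random factors i.i.d. over the entry set, rejecting
    decompositions containing an all-zero factor, and form the tensor
    D = sum_r u_r ⊗ v_r ⊗ w_r."""
    while True:
        factors = rng.choice(np.array(entry_set), size=(R, 3, S))
        if np.all(np.any(factors != 0, axis=-1)):   # no u, v or w equal to 0
            break
    tensor = np.einsum('ri,rj,rk->ijk',
                       factors[:, 0], factors[:, 1], factors[:, 2])
    return tensor, factors
```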

In addition to these synthetic demonstrations, we further add to the demonstration buffer previous games that have achieved large scores to reinforce the good moves made by the agent in these games.

Change of basis

The rank of a bilinear operation does not depend on the basis in which the tensor representing it is expressed, and for any invertible matrices A, B and C we have \({\rm{Rank}}\,({\mathscr{T}})={\rm{Rank}}\,({{\mathscr{T}}}^{({\bf{A}},{\bf{B}},{\bf{C}})})\), where \({{\mathscr{T}}}^{({\bf{A}},{\bf{B}},{\bf{C}})}\) is the tensor after change of basis given by

$${{\mathscr{T}}}_{ijk}^{({\bf{A}},{\bf{B}},{\bf{C}})}=\mathop{\sum }\limits_{a=1}^{S}\mathop{\sum }\limits_{b=1}^{S}\mathop{\sum }\limits_{c=1}^{S}{{\bf{A}}}_{ia}{{\bf{B}}}_{jb}{{\bf{C}}}_{kc}{{\mathscr{T}}}_{abc}.$$

(3)

Hence, exhibiting a rank-R decomposition of the matrix multiplication tensor \({{\mathscr{T}}}_{n}\) expressed in any basis proves that the product of two n × n matrices can be computed using R scalar multiplications. Moreover, it is straightforward to convert such a rank-R decomposition into a rank-R decomposition in the canonical basis, thus yielding a practical algorithm of the form shown in Algorithm 1. We leverage this observation by expressing the matrix multiplication tensor \({{\mathscr{T}}}_{n}\) in a large number of randomly generated bases (typically 100,000) in addition to the canonical basis, and letting AlphaTensor play games in all bases in parallel.
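
For illustration, equation (3) and the conversion of a decomposition found in a non-canonical basis back to the canonical basis can be written with a couple of einsum calls; this is a generic sketch, not the exact procedure used in the paper (which further restricts the sampled bases, as described below):

```python
import numpy as np

def change_of_basis(T, A, B, C):
    """Equation (3): T'_{ijk} = sum_{a,b,c} A_{ia} B_{jb} C_{kc} T_{abc}."""
    return np.einsum('ia,jb,kc,abc->ijk', A, B, C, T)

def decomposition_to_canonical(U, V, W, A, B, C):
    """If T^(A,B,C) = sum_r u_r ⊗ v_r ⊗ w_r, then
    T = sum_r (A^-1 u_r) ⊗ (B^-1 v_r) ⊗ (C^-1 w_r).
    U, V, W stack the factors row-wise with shape (R, S)."""
    return (U @ np.linalg.inv(A).T,
            V @ np.linalg.inv(B).T,
            W @ np.linalg.inv(C).T)
```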

This approach has three appealing properties: (1) it provides a natural exploration mechanism as playing games in different bases automatically injects diversity into the games played by the agent; (2) it exploits properties of the problem as the agent need not succeed in all bases—it is sufficient to find a low-rank decomposition in any of the bases; (3) it enlarges coverage of the algorithm space because a decomposition with entries in a finite set F = {−2, −1, 0, 1, 2} found in a different basis need not have entries in the same set when converted back into the canonical basis.

In full generality, a basis change for a 3D tensor of size S × S × S is specified by three invertible S × S matrices A, B and C. However, in our procedure, we sample bases at random and impose two restrictions: (1) A = B = C, as this performed better in early experiments, and (2) unimodularity (\(\det {\bf{A}}\in \{-1,+1\}\)), which ensures that after converting an integral factorization into the canonical basis it still contains integer entries only (this is for representational convenience and numerical stability of the resulting algorithm). See Supplementary Information for the exact algorithm.

Signed permutations

In addition to playing (and training on) games in different bases, we also utilize a data augmentation mechanism whenever the neural network is queried in a new MCTS node. At acting time, when the network is queried, we transform the input tensor by applying a change of basis—where the change of basis matrix is set to a random signed permutation. We then query the network on this transformed input tensor, and finally invert the transformation in the network’s policy predictions. Although this data augmentation procedure can be applied with any generic change of basis matrix (that is, it is not restricted to signed permutation matrices), we use signed permutations mainly for computational efficiency. At training time, whenever the neural network is trained on an (input, policy targets, value target) triplet (Fig. 2), we apply a randomly chosen signed permutation to both the input and the policy targets, and train the network on this transformed triplet. In practice, we sample 100 signed permutations at the beginning of an experiment, and use them thereafter.
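
A sketch of the augmentation, assuming the signed permutation is applied as a change of basis with A = B = C; the corresponding inverse transform of the policy predictions is omitted:

```python
import numpy as np

def random_signed_permutation(S, rng=np.random.default_rng()):
    """Permutation matrix whose nonzero entries carry random ±1 signs."""
    P = np.eye(S)[rng.permutation(S)]
    return P * rng.choice([-1.0, 1.0], size=S)   # flip the sign of random columns

def augment_input(T, P):
    """Apply the signed permutation P to the input tensor as a change of basis."""
    return np.einsum('ia,jb,kc,abc->ijk', P, P, P, T)
```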

Action canonicalization

For any λ1, λ2, λ3 ∈ {−1, +1} such that λ1λ2λ3 = 1, the actions (λ1u, λ2v, λ3w) and (u, v, w) are equivalent because they lead to the same rank-one tensor (λ1u) ⊗ (λ2v) ⊗ (λ3w) = u ⊗ v ⊗ w. To prevent the network from wasting capacity on predicting multiple equivalent actions, during training we always present targets (u, v, w) for the policy head in a canonical form, defined as having the first non-zero element of u and the first non-zero element of v strictly positive. This is well defined because u or v cannot be all zeros (if they are to be part of a minimal rank decomposition), and for any (u, v, w) there are unique λ1, λ2, λ3 ∈ {−1, +1} (with λ1λ2λ3 = 1) that transform it into canonical form. In case the network predicts multiple equivalent actions anyway, we merge them together (summing their empirical policy probabilities) before inserting them into the MCTS tree.
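
A sketch of this canonicalization, where the sign of w absorbs the constraint λ1λ2λ3 = 1:

```python
import numpy as np

def canonicalize_action(u, v, w):
    """Flip signs so the first non-zero entries of u and v are strictly
    positive while leaving the rank-1 tensor u ⊗ v ⊗ w unchanged.
    Assumes u and v are not all zeros, as in a minimal rank decomposition."""
    s_u = np.sign(u[np.nonzero(u)[0][0]])
    s_v = np.sign(v[np.nonzero(v)[0][0]])
    return s_u * u, s_v * v, (s_u * s_v) * w   # lambda_3 = lambda_1 * lambda_2
```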

Training regime

We train AlphaTensor on a TPU v3, with a total batch size of 2,048. We use 64 TPU cores, and train for 600,000 iterations. On the actor side, the games are played on standalone TPU v4, and we use 1,600 actors. In practice, the procedure takes a week to converge.

Neural network

The architecture is composed of a torso, followed by a policy head that predicts a distribution over actions, and a value head that predicts a distribution of the returns from the current state (see Extended Data Fig. 3).

Input

The input to the network contains all the relevant information of the current state and is composed of a list of tensors and a list of scalars. The most important piece of information is the current 3D tensor \({{\mathscr{S}}}_{t}\) of size S × S × S. (For simplicity, in the description here we assume that all three dimensions of the tensor are equal in size. The generalization to different sizes is straightforward.) In addition, the model is given access to the last h actions (h being a hyperparameter usually set to 7), represented as h rank-1 tensors that are concatenated to the input. The list of scalars includes the time index t of the current action (where 0 ≤ t < \({R}_{\text{limit}}\)).
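
A rough sketch of how such an input could be packed; the exact layout, padding and scalar encoding are not specified here, so the shapes and names below are assumptions:

```python
import numpy as np

def build_network_input(state, action_history, t, h=7):
    """Stack the current S×S×S tensor with the last h actions, each expanded
    into a rank-1 tensor, and return the scalar time index alongside."""
    S = state.shape[0]
    channels = [state]
    for (u, v, w) in action_history[-h:]:
        channels.append(np.einsum('i,j,k->ijk', u, v, w))
    while len(channels) < 1 + h:                 # zero-pad early in the game
        channels.append(np.zeros((S, S, S)))
    return np.stack(channels, axis=-1), np.array([t], dtype=np.float32)
```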

Torso

The torso of the network is in charge of mapping both scalars and tensors from the input to a representation that is useful to both policy and value heads. Its architecture is based on a modification of transformers23, and its main signature is that it operates over three S × S grids projected from the S × S × S input tensors. Each grid represents two out of the three modes of the tensor. Defining the modes of the tensor as \({\mathcal{U}},{\mathcal{V}},{\mathcal{W}}\), the rows and columns of the first grid are associated to \({\mathcal{U}}\) and \({\mathcal{V}}\), respectively, the rows and columns of the second grid are associated to \({\mathcal{W}}\) and \({\mathcal{U}}\), and the rows and columns of the third grid are associated to \({\mathcal{V}}\) and \({\mathcal{W}}\). Each element of each grid is a feature vector, and its initial value is given by the elements of the input tensors along the grid’s missing mode. These feature vectors are enriched by concatenating an S × S × 1 linear projection from the scalars. This is followed by a linear layer projecting these feature vectors into a 512-dimensional space.
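
The grid projection itself amounts to three transpositions of the input tensor, one per ordered pair of modes; a sketch (history channels and the scalar projection are omitted):

```python
import numpy as np

def tensor_to_grids(T):
    """Project an S×S×S tensor with modes (U, V, W) onto three S×S grids whose
    cells hold the S tensor elements along the grid's missing mode."""
    grid_uv = T                               # rows U, cols V, features along W
    grid_wu = np.transpose(T, (2, 0, 1))      # rows W, cols U, features along V
    grid_vw = np.transpose(T, (1, 2, 0))      # rows V, cols W, features along U
    return grid_uv, grid_wu, grid_vw
```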

The rest of the torso is a sequence of attention-based blocks with the objective of propagating information between the three grids. Each of those blocks has three stages, one for every pair of grids. In each stage, the grids involved are concatenated, and axial attention24 is performed over the columns; that is, in each stage we perform, in parallel, S self-attention operations over 2S elements each. The representation sent to the policy head corresponds to the 3S² 512-dimensional feature vectors produced by the last layer of the torso. A detailed description of the structure of the torso is specified in Extended Data Fig. 4 (top) and Appendix A.1.1 in Supplementary Information.

Policy head

The policy head uses the transformer architecture23 to model an autoregressive policy. Factors are decomposed into k tokens of dimensionality d such that k × d = 3S. The transformer conditions on the tokens already generated and cross-attends to the features produced by the torso. At training time, we use teacher-forcing, that is, the ground truth actions are decomposed into tokens and taken as inputs into the causal transformer in such a way that the prediction of a token depends only on the previous tokens. At inference time, K actions are sampled from the head. The feature representation before the last linear layer of the initial step (that is, the only step that is not conditioned on the ground truth) is used as an input to the value head, described below. Details of the architecture are presented in Extended Data Fig. 4 (centre) and Appendix A.1.2 in Supplementary Information.
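
The factor-to-token split described above is a simple reshape; a sketch assuming d divides 3S:

```python
import numpy as np

def action_to_tokens(u, v, w, d):
    """Split the concatenated action [u; v; w] of length 3S into k = 3S/d
    tokens of dimensionality d for the autoregressive policy head."""
    a = np.concatenate([u, v, w])
    assert a.size % d == 0, "d must divide 3S"
    return a.reshape(-1, d)
```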

Value head

The value head is composed of a four-layer multilayer perceptron whose last layer produces q outputs corresponding to the \(\frac{1}{2q},\frac{3}{2q},\ldots ,\frac{2q-1}{2q}\) quantiles. In this way, the value head predicts the distribution of returns from this state in the form of values predicted for the aforementioned quantiles34. At inference time, we encourage the agent to be risk-seeking by using the average of the predicted values for quantiles over 75%. A detailed description of the value head is presented in Extended Data Fig. 4 (bottom) and Appendix A.1.3 in Supplementary Information.
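
The risk-seeking value used at inference follows directly from the quantile outputs; a small sketch:

```python
import numpy as np

def risk_seeking_value(quantile_values, threshold=0.75):
    """Average of the predicted values for the quantiles above 75%;
    quantile_values[j] predicts the (2j+1)/(2q) quantile."""
    q = quantile_values.shape[0]
    taus = (2 * np.arange(q) + 1) / (2 * q)
    return float(quantile_values[taus > threshold].mean())
```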

Related work

The quest for efficient matrix multiplication algorithms started with Strassen’s breakthrough in ref. 2, which showed that one can multiply 2 × 2 matrices using 7 scalar multiplications, leading to an algorithm of complexity \({\mathcal{O}}({n}^{2.81})\). This led to the development of a very active field of mathematics attracting worldwide interest, which studies the asymptotic complexity of matrix multiplication (see refs. 3,4,5,6). So far, the best known complexity for matrix multiplication is \({\mathcal{O}}({n}^{2.37286})\) (ref. 12), which improves over ref. 11, and builds on top of fundamental results in the field8,9,10. However, this does not yield practical algorithms, as such approaches become advantageous only for astronomical matrix sizes. Hence, a significant body of work aims at exhibiting explicit factorizations of matrix multiplication tensors, as these factorizations provide practical algorithms. After Strassen’s breakthrough showing that \({\rm{rank}}\,({{\mathscr{T}}}_{2})\le 7\), efficient algorithms for larger matrix sizes were found15,16,18,26,38. Most notably, Laderman showed in ref. 15 that 3 × 3 matrix multiplications can be performed with 23 scalar multiplications. In addition to providing individual low-rank factorizations, an important research direction aims at understanding the space of matrix multiplication algorithms—as opposed to exhibiting individual low-rank factorizations—by studying the symmetry groups and diversity of factorizations (see ref. 5 and references therein). For example, the symmetries of 2 × 2 matrix multiplication were studied in refs. 39,40,41,42, where Strassen’s algorithm was shown to be essentially unique. The case of 3 × 3 was studied in ref. 43, whereas a symmetric factorization for all n is provided in ref. 44.

On the computational front, continuous optimization has been the main workhorse for decomposing tensors17,45,46, and in particular matrix multiplication tensors. Such continuous optimization procedures (for example, alternating least squares), however, yield approximate solutions, which correspond to inexact matrix multiplication algorithms with floating point operations. To circumvent this issue, regularization procedures have been proposed, such as ref. 18, to extract exact decompositions. Unfortunately, such approaches often require substantial human intervention and expertise to decompose large tensors. A different line of attack was explored in refs. 47,48, based on learning the continuous weights of a two-layer network that mimics the structure of the matrix multiplication operation. This method, which is trained through supervised learning of matrix multiplication examples, finds approximate solutions to 2 × 2 and 3 × 3 matrix multiplications. In ref. 48, a quantization procedure is further used to obtain an exact decomposition for 2 × 2. Unlike continuous optimization-based approaches, AlphaTensor directly produces algorithms from the desired set of valid algorithms, and is flexible in that it allows us to optimize a wide range of (even non-differentiable) objectives. This unlocks tackling broader settings (for example, optimization in finite fields, optimization of runtime), as well as larger problems (for example, \({{\mathscr{T}}}_{4}\) and \({{\mathscr{T}}}_{5}\)) than those previously considered. Different from continuous optimization, a boolean satisfiability (SAT) based formulation of the problem of decomposing 3 × 3 matrix multiplication was recently proposed in ref. 20, which adds thousands of new decompositions of rank 23 to the list of known 3 × 3 factorizations. The approach relies on a state-of-the-art SAT solving procedure, where several assumptions and simplifications are made on the factorizations to reduce the search space. As is, this approach is, however, unlikely to scale to larger tensors, as the search space grows very quickly with the size.

On the practical implementation front, ref. 31 proposed several ideas to speed up implementation of fast matrix multiplication algorithms on central processing units (CPUs). Different fast algorithms are then compared and benchmarked, and the potential speed-up of such algorithms is shown against standard multiplication. Other works focused on getting the maximal performance out of a particular fast matrix multiplication algorithm (Strassen’s algorithm with one or two levels of recursion) on a CPU32 or a GPU49. These works show that, despite popular belief, such algorithms are of practical value. We see writing a custom low-level implementation of a given algorithm to be distinct from the focus of this paper—developing new efficient algorithms—and we believe that the algorithms we discovered can further benefit from a more efficient implementation by experts.

Beyond matrix multiplication and bilinear operations, a growing amount of research studies the use of optimization and machine learning to improve the efficiency of computational operations. There are three levels of abstractions at which this can be done: (1) in the hardware design, for example, chip floor planning50, (2) at the hardware–software interface, for example, program super-optimization of a reference implementation for specific hardware51, and (3) on the algorithmic level, for example, program induction52, algorithm selection53 or meta-learning54. Our work focuses on the algorithmic level of abstraction, although AlphaTensor is also flexible to discover efficient algorithms for specific hardware. Different from previous works, we focus on discovering matrix multiplication algorithms that are provably correct, without requiring initial reference implementations. We conclude by relating our work broadly to existing reinforcement learning methods for scientific discovery. Within mathematics, reinforcement learning was applied, for example, to theorem proving55,56,57,58, and to finding counterexamples refuting conjectures in combinatorics and graph theory59. Reinforcement learning was further shown to be useful in many areas in science, such as molecular design60,61 and synthesis62 and optimizing quantum dynamics63.

Read original article here

Super-Earths are bigger, more common and more habitable than Earth itself – and astronomers are discovering more of the billions they think are out there

Astronomers think the most likely place to find life in the galaxy is on super-Earths, like Kepler-69c, seen in this artist’s rendering. (Image credit: NASA Ames/JPL-CalTech)

This article was originally published at The Conversation. The publication contributed the article to Space.com’s Expert Voices: Op-Ed & Insights.

Astronomers now routinely discover planets orbiting stars outside of the solar system — they’re called exoplanets. But in summer 2022, teams working on NASA’s Transiting Exoplanet Survey Satellite found a few particularly interesting planets orbiting in the habitable zones of their parent stars.

One planet is 30% larger than Earth and orbits its star in less than three days. The other is 70% larger than the Earth and might host a deep ocean. These two exoplanets are super-Earths — more massive than the Earth but smaller than ice giants like Uranus and Neptune.

I’m a professor of astronomy who studies galactic cores, distant galaxies, astrobiology and exoplanets. I closely follow the search for planets that might host life.

Earth is still the only place in the universe scientists know to be home to life. It would seem logical to focus the search for life on Earth clones — planets with properties close to Earth’s. But research has shown that the best chance astronomers have of finding life on another planet is likely to be on a super-Earth similar to the ones found recently.


A super-Earth is any rocky planet that is bigger than Earth and smaller than Neptune. (Image credit: Aldaron, CC BY-SA)

Common and easy to find

Most super-Earths orbit cool dwarf stars, which are lower in mass and live much longer than the sun. There are hundreds of cool dwarf stars for every star like the Sun, and scientists have found super-Earths orbiting 40% of cool dwarfs they have looked at. Using that number, astronomers estimate that there are tens of billions of super-Earths in habitable zones where liquid water can exist in the Milky Way alone. Since all life on Earth uses water, water is thought to be critical for habitability.

Based on current projections, about a third of all exoplanets are super-Earths, making them the most common type of exoplanet in the Milky Way. The nearest is only 6 light-years away from Earth. You might even say that our solar system is unusual since it does not have a planet with a mass between that of Earth and Neptune.

Most exoplanets are discovered by looking for how they dim the light coming from their parent stars, so bigger planets are easier to find. (Image credit: Nikola Smolenski, CC BY-SA)

Another reason super-Earths are ideal targets in the search for life is that they’re much easier to detect and study than Earth-sized planets. There are two methods astronomers use to detect exoplanets. One looks for the gravitational effect of a planet on its parent star and the other looks for brief dimming of a star’s light as the planet passes in front of it. Both of these detection methods are easier with a bigger planet.

Super-Earths are super habitable

Over 300 years ago, German philosopher Gottfried Wilhelm Leibniz argued that Earth was the “best of all possible worlds.” Leibniz’s argument was meant to address the question of why evil exists, but modern astrobiologists have explored a similar question by asking what makes a planet hospitable to life. It turns out that Earth is not the best of all possible worlds.

Due to Earth’s tectonic activity and changes in the brightness of the sun, the climate has veered over time from ocean-boiling hot to planetwide, deep-freeze cold. Earth has been uninhabitable for humans and other larger creatures for most of its 4.5-billion-year history. Simulations suggest the long-term habitability of Earth was not inevitable, but was a matter of chance. Humans are literally lucky to be alive.

Researchers have come up with a list of the attributes that make a planet very conducive to life. Larger planets are more likely to be geologically active, a feature that scientists think would promote biological evolution. So the most habitable planet would have roughly twice the mass of the Earth and be between 20% and 30% larger by volume. It would also have oceans that are shallow enough for light to stimulate life all the way to the seafloor and an average temperature of 77 degrees Fahrenheit (25 degrees Celsius). It would have an atmosphere thicker than the Earth’s that would act as an insulating blanket. Finally, such a planet would orbit a star older than the Sun to give life longer to develop, and it would have a strong magnetic field that protects against cosmic radiation. Scientists think that these attributes combined will make a planet super habitable.

By definition, super-Earths have many of the attributes of a super habitable planet. To date, astronomers have discovered two dozen super-Earth exoplanets that are, if not the best of all possible worlds, theoretically more habitable than Earth.

Recently, there’s been an exciting addition to the inventory of habitable planets. Astronomers have started discovering exoplanets that have been ejected from their star systems, and there could be billions of them roaming the Milky Way. If a super-Earth is ejected from its star system and has a dense atmosphere and watery surface, it could sustain life for tens of billions of years, far longer than life on Earth could persist before the sun dies.

One of the newly discovered super-Earths, TOI-1452b, might be covered in a deep ocean and could be conducive to life. (Image credit: Benoit Gougeon, Université de Montréal, CC BY-ND)

Detecting life on super-Earths

To detect life on distant exoplanets, astronomers will look for biosignatures, byproducts of biology that are detectable in a planet’s atmosphere.

NASA’s James Webb Space Telescope was designed before astronomers had discovered exoplanets, so the telescope is not optimized for exoplanet research. But it is able to do some of this science and is scheduled to target two potentially habitable super-Earths in its first year of operations. Another set of super-Earths with massive oceans discovered in the past few years, as well as the planets discovered this summer, are also compelling targets for James Webb.

But the best chances for finding signs of life in exoplanet atmospheres will come with the next generation of giant, ground-based telescopes: the 39-meter Extremely Large Telescope, the Thirty Meter Telescope and the 25.4-meter Giant Magellan Telescope. These telescopes are all under construction and set to start collecting data by the end of the decade.

Astronomers know that the ingredients for life are out there, but habitable does not mean inhabited. Until researchers find evidence of life elsewhere, it’s possible that life on Earth was a unique accident. While there are many reasons why a habitable world would not have signs of life, if, over the coming years, astronomers look at these super habitable super-Earths and find nothing, humanity may be forced to conclude that the universe is a lonely place.

Editor’s Note: The story has been updated to correct the size of the Giant Magellan Telescope.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Follow all of the Expert Voices issues and debates — and become part of the discussion — on Twitter at @Spacedotcom and on Facebook. The views expressed are those of the author and do not necessarily reflect the views of the publisher.



Read original article here

China plans three missions to the Moon after discovering a new lunar mineral that may be a future energy source

A full moon rises over Beijing in May.Getty Images

  • China aims to launch three moon missions over the next decade as part of its Chang’e lunar program.

  • China’s National Space Administration won approval for the missions after it found a new mineral. 

  • The mineral, Changesite-(Y), could be a future source of energy and was found in lunar samples.

China is aiming to launch three unmanned missions to the moon after discovering a new lunar mineral that could be an energy source in the future.

The space race between China and the US is accelerating after Beijing’s National Space Administration got the go-ahead to launch three orbiters to the moon over the next 10 years, it announced on Saturday. The news was first reported by Bloomberg. 

It comes a day after China became the third country to discover a new lunar mineral, which it called Changesite-(Y), according to Chinese state-controlled newspaper the Global Times.

China’s Chang’e-5 mission retrieved samples from the moon in 2020, and the Global Times has described the mineral as a “phosphate mineral in columnar crystal” found in lunar rock particles. The mineral contains helium-3, which could be a future source of energy.

The discovery may put more pressure on the US to ramp up its efforts after its Artemis I moon mission was postponed for a second time.

Moon mining could be the next source of tension between the countries as NASA is also probing the moon’s south pole where China plans to build a research station in conjunction with Russia.

China has accelerated its efforts in space exploration of late by building a space station, launching a number of missions to collect moon samples and putting a rover called Zhurong on Mars earlier this year to rival NASA. 

The US remains the only country to put astronauts on the moon, with the last landing almost 50 years ago in the Apollo 17 mission, according to NASA’s website.

The US Apollo 11 mission, in July 1969, was the first to bring samples from the moon back to Earth, returning about 49 pounds (22 kilograms) of material from the lunar surface.

China’s National Space Administration was contacted for comment.

Read the original article on Business Insider

Read original article here


Discovering a Major Contributor to Alzheimer’s Disease

Summary: Researchers identified a pathway that begins in the gut and ends with a pro-inflammatory protein in the brain that appears to contribute to the development and progression of Alzheimer’s disease.

Source: LSU

Research led by Drs. Yuhai Zhao and Walter J Lukiw at the LSU Health New Orleans Neuroscience Center and the Departments of Cell Biology and Anatomy, Neurology and Ophthalmology, reports for the first time a pathway that begins in the gut and ends with a potent pro-inflammatory toxin in brain cells contributing to the development of Alzheimer’s disease (AD). They also report a simple way to prevent it.

Results are published in Frontiers in Neurology.

The researchers found evidence that the Gram-negative bacterium Bacteroides fragilis in the human gastrointestinal (GI) tract generates a very potent microbial neurotoxin, a form of lipopolysaccharide (LPS) known as BF-LPS.

“LPSs in general are probably the most potent microbial-derived pro-inflammatory neurotoxic glycolipids known,” says Dr. Lukiw. “Many laboratories, including our own, have detected different forms of LPS within neurons of the Alzheimer’s disease-affected human brain.”

In this study, the researchers detail the pathway of BF-LPS from the gut to the brain and its mechanisms of action once there. BF-LPS leaks out of the GI tract, crosses the blood-brain barrier via the circulatory system, and accesses brain compartments. Then it increases inflammation in brain cells and inhibits neuron-specific neurofilament light (NF-L), a protein that supports cell integrity.

A deficit of this protein leads to progressive neuronal cell atrophy, and ultimately cell death, as is observed in AD-affected neurons. They also report that adequate intake of dietary fiber can head off the process.

The novel features of this newly described pathological pathway are threefold. The AD-stimulating pathway begins inside of us—in our GI-tract microbiome—and therefore is very “locally sourced” and active throughout our lives.

BF-LPS leaks out of the GI tract, crosses the blood brain barrier via the circulatory system, and accesses brain compartments. Credit: LSU

The highly potent neurotoxin BF-LPS is a natural by-product of GI-tract-based microbial metabolism. Bacteroides fragilis abundance in the microbiome, which is the source of the neurotoxin BF-LPS, can be regulated by dietary fiber intake.

“Put another way, dietary-based approaches to balance the microorganisms in the microbiome may be an attractive means to modify the abundance, speciation, and complexity of enterotoxigenic forms of AD-relevant microbes and their potential for the pathological discharge of highly neurotoxic microbial-derived secretions that include BF-LPS and other forms of LPS,” Lukiw explains.

The researchers conclude that an improved understanding of the interaction between the GI tract-Central Nervous System axis and the GI-tract microbiome and Alzheimer’s disease has considerable potential to lead to new diagnostic and therapeutic strategies in the clinical management of Alzheimer’s disease and other lethal, progressive, and age-related neurodegenerative disorders.

It has been estimated that Americans eat 10–15 grams of fiber a day on average. The USDA recommends that women up to age 50 consume 25 grams a day and men 38 grams. Over age 50, women and men should consume 21 and 30 grams daily, respectively.

According to the National Institutes of Health, Alzheimer’s disease is the most common diagnosis for patients with dementia and the sixth leading cause of death for Americans. Experts estimate that as many as 5.8 million Americans 65 and older have Alzheimer’s disease, and the prevalence in the United States is projected to increase to 13.8 million by 2050.

About this Alzheimer’s disease research news

Author: Press Office
Source: LSU
Contact: Press Office – LSU
Image: The image is credited to LSU


Original Research: Open access.
“Downregulation of Neurofilament Light Chain Expression in Human Neuronal-Glial Cell Co-Cultures by a Microbiome-Derived Lipopolysaccharide-Induced miRNA-30b-5p” by Aileen I. Pogue et al. Frontiers in Neurology


Abstract

Downregulation of Neurofilament Light Chain Expression in Human Neuronal-Glial Cell Co-Cultures by a Microbiome-Derived Lipopolysaccharide-Induced miRNA-30b-5p

Microbiome-derived Gram-negative bacterial lipopolysaccharide (LPS) has been shown by multiple laboratories to reside within Alzheimer’s disease (AD)-affected neocortical and hippocampal neurons. LPS and other pro-inflammatory stressors strongly induce a defined set of NF-kB (p50/p65)-sensitive human microRNAs, including a brain-enriched Homo sapien microRNA-30b-5p (hsa-miRNA-30b-5p; miRNA-30b).

Here we provide evidence that this neuropathology-associated miRNA, known to be upregulated in AD brain and LPS-stressed human neuronal-glial (HNG) cells in primary culture, targets the neurofilament light (NF-L) chain mRNA 3′-untranslated region (3′-UTR), which is conducive to the post-transcriptional downregulation of NF-L expression observed within both AD and LPS-treated HNG cells.

A deficiency of NF-L is associated with consequent atrophy of the neuronal cytoskeleton and the disruption of synaptic organization. Interestingly, miRNA-30b has previously been shown to be highly expressed in amyloid-beta (Aβ) peptide-treated animal and cell models, and Aβ peptides promote LPS entry into neurons. Increased miRNA-30b expression induces neuronal injury, neuron loss, neuronal inflammation, impairment of synaptic transmission, and synaptic failure in neurodegenerative disease and transgenic murine models.

This gut microbiota-derived LPS-NF-kB-miRNA-30b-NF-L pathological signaling network: (i) underscores a positive pathological link between the LPS of gastrointestinal (GI)-tract microbes and the inflammatory neuropathology, disordered cytoskeleton, and disrupted synaptic signaling of the AD brain and stressed brain cells; and (ii) is the first example of a microbiome-derived neurotoxic glycolipid having significant detrimental miRNA-30b-mediated actions on the expression of NF-L, an abundant neuron-specific filament protein known to be important in the maintenance of neuronal cell shape, axonal caliber, and synaptic homeostasis.

Read original article here

12 women in Thailand fear lawsuit after discovering they all had sex with the same married man

Twelve Thai women approached a lawyer for advice after discovering that the same man they all met online and carried on separate sexual relationships with is married.

The 60-year-old married man, identified as Pu, reportedly created eight Facebook accounts, four Instagram accounts and three Line accounts to draw women into sexual relationships.

Pu, who allegedly targeted single mothers, told the women that he worked as the managing director of a Japanese company in Thailand.

His infidelity came to light when one of the women, Noo Ya, 56, revealed on Facebook how Pu ended their relationship after she posted about them on social media. She claimed Pu told her that he had divorced his wife eight years ago and wanted to be in a serious relationship with her. Ya said their relationship went well in the beginning, but Pu had always wanted to keep their relationship a secret.


The Facebook post apparently caught the attention of other women, all aged 40-60, who knew Pu and had similar experiences. Eleven other women came forward, revealing how the philanderer had also cheated on them, according to Ya.

The 12 victims, who did not know each other before, approached Thai lawyer Ratchapon Sirisakhon on July 17 in the central province of Nonthaburi. The women asked for legal advice as they were afraid of being sued by Pu’s wife, according to Sirisakhon. Thai law allows the first legal wife of a man to sue those who have an affair with her husband, provided they were not “lured” into the relationship, according to The Thaiger.

There have reportedly been mistresses in Thailand who were sued for between 500,000 baht (approximately $13,645) and 5,000,000 baht (approximately $136,453) in the past.


Ya said she wanted to apologize to Pu’s wife for how she and the other women became his mistresses without being aware of the man’s existing marriage.

The lawyer advised the women to file separate complaints against Pu to the police.

 




Read original article here

Alaçati: Discovering Western Turkey’s Hidden Gem

(CNN) — A brass band fills the air with lively Turkish music as it marches down a cobblestone street. Tourists and locals alike clap and dance, packing the main square.

The musicians are dressed in turquoise uniforms with gold epaulettes — colors that match the azure-blue of the Aegean waters of Alaçatı, located on the Cesme peninsula in western Turkey (or Türkiye).

With its rustic stone houses with vibrantly painted wooden doors, winding streets filled with upscale restaurants, modern art galleries and boutique stores, Alaçatı has all the hallmarks of a typical Mediterranean town, albeit with an effortless Turkish charm.

Turkish holidaymakers have been flocking to this picturesque destination, situated about an hour’s drive southwest from Izmir, since a revival took place in the early 2000s. But international visitors are just playing catch up.

Everywhere you turn, people are sipping on Turkish tea in the town’s open-air cafes, taking in the fresh air and sunshine.

Colorful town

Alacati’s cobbled alleyways are filled with brightly colored houses.

Murat Topay/Adobe Stock

And it’s not just holidaymakers who’ve been won over — windsurfers have also become a fixture here due to the strong, year-round winds of the nearby bay, making Alaçatı the self-proclaimed windsurfing capital of Turkey.

“The air that you breathe here is precious,” says Perihan Akbulut, who runs the Kurabiye Hotel, which is a few minutes walk from the stunning Kurabiye Stone House.

“There is always a breeze that makes you feel better — even if it is so hot during summer months.”


Kurabiye Hotel could easily be confused with a boutique Aegean guest house thanks to its whitewashed walls and blue doors.

Akbulut moved here from Istanbul in 2009 and opened the hotel, which has a courtyard adorned with a lemon garden at its center, the following year.

She and her husband now live between the two destinations — enjoying big city life during the winter and retreating to the pretty town for the summer season.

“It is so easy — or let’s say simple — to organize your life, here in Alaçatı,” she adds.

“But that’s exactly why it appeals to me; it’s a place where you don’t have to think too much about what to do and where to go.”


Premier produce

The Alacati Herb Festival, which showcases the herbs of the region, is held in the town each spring.

Berkcan Zengin/GocherImagery/MediaPunch/IPX/AP

Alaçatı has been through many reinventions over the years. It was mostly swampland up until the mid-19th century, when authorities had it drained to rid the area of marshes that bred malaria-causing mosquitoes.

Greek workers from nearby islands settled here to farm the vineyards and olive groves, and the region continues to be the premier wine-making and olive-oil producing region in Turkey.

Like other towns in the area, Alaçatı was largely abandoned in the 1920s, after Turkey and Greece engaged in a religion-based population exchange. Many of its stone houses lay discarded for decades and fell into disrepair.

But it has experienced a sea change of sorts in the last 20 years. Eagle-eyed visitors from across the country spotted an opportunity to buy holiday homes here, breathing new life into the town.

Locals often boast about the region’s produce and artisanal food, and on Saturdays, market stalls heaving with fruit and vegetables, fresh bread and seafood line the town’s main artery, Kemalpaşa Street, and extend up numerous side streets.

Each spring, the Alaçatı Herb Festival, an annual event showcasing the natural herbs that grow in Alaçatı and the nearby coastal town of Çesme, attracts hordes of visitors to the area.

“Everything is so reachable,” adds Akbulut. “Delicious food, fresh vegetables and fruit, natural herbs and seafood.

“It is a good combination of local people and new settlers. You can never feel lonely here.”

‘Heaven on Earth’

Alacati is just a short drive from the beaches along the Cesme peninsula, including the stunning Ilica beach.

Emin Menguarslan/Anadolu Agency/Getty Images

This region of the eastern Mediterranean is steeped in history. The ancient ruins of Ephesus, a UNESCO World Heritage Site, can be found just an hour’s drive to the south of Alaçatı.

Based near the Aegean Sea and modern-day Selçuk, it remains one of Christianity’s most revered sites and is home to some of Turkey’s best archaeological wonders including the Temple of Hadrian, built in honor of Roman Emperor Hadrian.

Meanwhile, the thermal pools of Pamukkale, which literally translates to “cotton castle” and can be mistaken for a cluster of angelic clouds if viewed at the right angle, are just a few hours away by car.


Tourists have been coming to this popular site of ancient travertines, another UNESCO World Heritage Site, for centuries.

The well-preserved ruins of the Hierapolis ancient city, once a healing center, are among its highlights, along with the Pamukkale Antique Pool, also known as Cleopatra’s Pool, where Egyptian Queen Cleopatra is believed to have once swum.

And for those who’d rather stay closer to town, a trip to the hot thermal springs of Ilica beach, where the water remains shallow for the first 100 meters or so from shore, will not disappoint.

“We call this place ‘Heaven on Earth’,” Çesme Mayor Ekrem Oran tells CNN while admiring the herbs at Alaçatı’s Saturday market.

Judging from the faces of those around, you’d be hard-pressed to find anyone who’d disagree with this sentiment.

Read original article here

72-year-old baffled by his ‘whistling scrotum’ after discovering air was hissing out of his sack

The man with the… whistling scrotum? 72-year-old was baffled by the hissing noise coming from his genitals only to discover it was leaking out of an open wound

  • Elderly man came to A&E with the bizarre injury and with a shortness of breath
  • The air was coming from a hole in his scrotum left behind from previous surgery 
  • US medics found this was actually air escaping from the man’s collapsed lungs  

A man who heard a ‘hissing’ noise coming from his genitals has become the first person in the world to be diagnosed with ‘whistling scrotum’.

The unidentified 72-year-old went to an emergency department in Ohio, complaining about his bizarre malady.

At the same time, he also found himself short of breath and had a swollen face. 

Scans eventually revealed his lungs had collapsed and air was building up inside his body — a dangerous and potentially life-threatening condition. 

Doctors then discovered the source of the man’s scrotal whistling — an open wound on the left side of his scrotum.

The wound, leftover from testicle surgery to reduce swelling five months earlier, was allowing some trapped air to escape.  

Medics described the unusual case in the American Journal of Case Reports, saying they believe it to be the first of its kind.

A CT scan of the 72-year-old man’s chest. The black arrows point to the man’s collapsed lungs, whereas the white arrows point to air pockets (solid black areas) trapped between the exterior of the lungs and the inside of the chest wall. Black areas throughout the man’s neck, shoulders and arms also show air trapped inside the body

This CT scan of the man’s lower abdomen shows the amount of air (black areas) inside his thighs and scrotum. Medics who reported the case said an open wound in his scrotum acted as an ‘escape route’ for the trapped air  



An X-ray of the man’s chest revealed he had ‘excessive’ amounts of air floating within his body which had caused his lungs to collapse. Left untreated, this can start to impact the function of the heart and lungs and become life threatening. 

The complication was also to blame for his shortness of breath and swollen face. 

Two plastic tubes were inserted into his chest to help drain the excess air.

He was then transferred to another hospital for further treatment, Dr Brant Bickford and colleagues wrote. 

His condition then worsened, with the amount of air trapped between the lungs and the chest wall growing larger. It prompted medics to insert another chest tube.

But after three days in hospital his lungs had recovered. He was then discharged in a stable condition.

However, he still had air trapped in his scrotum and abdomen for another two years, which was described as an ‘abnormally long time’. Medics did not explain why this phenomenon occurred. 

Doctors were eventually forced to remove both his testicles before the problem was resolved, but it’s not clear why this was done. 

The medical term for air becoming trapped in the scrotum is pneumoscrotum, a rare condition. Just 60 cases have been described in medical literature.

Many of these were caused by penetrating injuries, which can lead to air becoming trapped.

None of the previously reported cases of pneumoscrotum had included a ready-made route for the air to escape. 

Normally this air needs medical intervention to escape as it cannot exit through the body’s orifices. 

Dr Bickford said it was unclear if this ‘escape route’ had actually helped improve the man’s condition or not. 

‘Whether the air escape attenuated the patient’s presentation and led to a more favorable outcome will never be known,’ he said. 

Yet another unusual aspect of the case was how long it took for the air build-up inside his scrotum, perineum and thighs to resolve. 

Medics said the man ‘adamantly denied’ self-injecting air into his scrotum during his recovery and that nothing in his behaviour over a three year follow-up led them to doubt this.

They also did not reveal when exactly the incident took place. 

Read original article here