Internships
Each year, CPPM welcomes more than a dozen trainees into the various research teams of the laboratory. The internships offered by the laboratory fall into several categories:
- Bachelor's/Master's level physics internships: these may be spontaneous applications or a compulsory part of a degree, and are intended for Bachelor's and Master's students following a physics curriculum. Specific offers are posted by the various research teams during the year.
- Technical internships (BTS, IUT, engineering school): these are generally part of your school curriculum. Specific offers are posted by the various teams and departments during the year.
- High school internships: we welcome high school students for observation internships during specific periods.
To apply for physics or technical internships, you must attach to your application a CV, a cover letter and your most recent transcript (from the previous year, or from the latest semester of the current year if available). For Master's internships, recommendation letters from your professors or former internship supervisors may be requested.
Whatever the nature of your internship, a favourable response from one of our laboratory staff is not sufficient to hire you as an intern. Only two conditions formally qualify you as a trainee at CPPM: the agreement of the CPPM director and the establishment of a legal agreement between CPPM and your school/university.
Contacts: William Gillard (Physics Internships), Frédéric Hachon (Technical Internships), Jocelyne Munoz (Administrative Internships)
M2 Internships
The data acquisition and trigger electronics of the ATLAS liquid argon calorimeter will be fully replaced as part of the second phase of upgrade of the ATLAS detector. The new backend electronics will be based on high-end FPGAs that will compute on-the-fly the energy deposited in the calorimeter before sending it to the trigger and data acquisition systems. New state-of-the-art algorithms, based on neural networks, are being developed to compute the energy and improve its resolution in the harsh conditions of the HL-LHC.
The candidate is expected to take a role in the development of data processing algorithms to efficiently compute the energies deposited in the LAr calorimeters in the high pile-up conditions expected at the HL-LHC. These algorithms will be based on AI techniques such as recurrent neural networks and will be adapted to fit on hardware processing units based on high-end FPGAs. The successful candidate will be responsible for designing the AI algorithms, using Python and Keras, and for assessing their performance. The candidate will also assess the effect of employing such algorithms for electromagnetic object reconstruction (especially at trigger level). She/he will work closely with the engineers designing the electronic cards at CPPM in order to adapt the AI algorithms to the specifics of FPGAs. Candidates with a strong interest in hardware will be encouraged to take part in the design of the firmware to program the FPGAs.
Prior knowledge of Keras, Python and C++ is desirable but not mandatory.
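As an illustration of the kind of computation involved, the recurrent update that a vanilla RNN applies to a digitized calorimeter pulse can be sketched in a few lines of NumPy. The weights and pulse shape below are arbitrary placeholders, not the actual ATLAS algorithm:

```python
import numpy as np

def rnn_energy(samples, Wx, Wh, wo):
    """Toy vanilla-RNN forward pass over the digitized samples of one
    calorimeter cell; returns a single energy estimate from the final
    hidden state. Weights here are placeholders, not trained values."""
    h = np.zeros(Wh.shape[0])
    for x in samples:
        h = np.tanh(Wx * x + Wh @ h)  # one recurrent update per ADC sample
    return float(wo @ h)              # linear readout of the energy

rng = np.random.default_rng(0)
n_hidden = 4
Wx = 0.1 * rng.normal(size=n_hidden)               # input weights (made up)
Wh = 0.1 * rng.normal(size=(n_hidden, n_hidden))   # recurrent weights (made up)
wo = rng.normal(size=n_hidden)                     # output weights (made up)
pulse = np.array([0.0, 0.2, 1.0, 0.7, 0.3, 0.1])   # idealized pulse shape
print(rnn_energy(pulse, Wx, Wh, wo))
```

In the real system this per-sample update must fit within the FPGA latency budget, which is why the network architecture and the hardware design are developed together.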
The last piece of the standard model of particle physics, the Higgs boson, was discovered by the ATLAS and CMS collaborations in 2012. The newly discovered boson provides a unique possibility to search for unknown physics beyond the standard model. The ATLAS group at CPPM has had a leading role in detecting and studying the Higgs boson properties in several of its production and decay modes. The group is currently concentrating on the detection of the production of two Higgs bosons or two scalar bosons, a process that has never been observed.
This internship will concentrate on the study of the production of a scalar boson with a Higgs boson, decaying to a pair of photons and a pair of b-quarks. The detection of such a process would be strong proof of the existence of new physics and of a modification of electroweak symmetry breaking as described by the standard model. Run 3 of the LHC, currently in operation, will provide enough data (in combination with previous data) to improve the discovery potential for this process. The analysis of the Run 3 data is being prepared now by a group of several institutes around the world collaborating at CERN. The successful candidate will work within this collaboration and take part in preparing and studying simulation samples that describe this physics process. The candidate will work within a team of three researchers and one PhD student at CPPM. He/she will analyze the kinematic and topological distributions of simulated events in order to validate the simulation, improve the selection of signal events and separate them from background. Prior knowledge of a programming language, especially C++/ROOT or Python, is an advantage but is not mandatory.
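For illustration, the key kinematic variable in this channel, the diphoton invariant mass, can be computed from photon kinematics as follows (the photon values are invented for the example):

```python
import numpy as np

def diphoton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two (massless) photons from transverse momentum,
    pseudorapidity and azimuth; momenta in GeV, angles in radians."""
    return float(np.sqrt(2.0 * pt1 * pt2
                         * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2))))

# Two invented photons, roughly back to back in phi:
m_gg = diphoton_mass(62.0, 0.3, 0.1, 55.0, -0.5, 3.0)
print(f"m_gg = {m_gg:.1f} GeV")  # → m_gg = 125.5 GeV, a Higgs candidate
```

Selecting events with m_gg near 125 GeV, together with b-tagging requirements on the jet pair, is the kind of signal/background separation the candidate would study on simulated samples.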
The predictivity of the Standard Model (SM) of particle physics remains unchallenged by experimental results. After the tantalizing discovery of the Higgs boson at LHC, the measurements of properties such as its mass, spin, parity and its couplings with other SM particles have confirmed its SM-like nature. This goes hand in hand with the absence of direct signs of TeV physics beyond the SM from current direct searches.
Indeed, the excellent performance of the LHC in terms of delivered luminosity allowed the ATLAS and CMS experiments to set stringent limits on new particle masses well beyond the EW scale, thus worsening the naturalness problem. If the new physics scale lies well above the present experimentally probed energies, one would be left with the only experimental perspective of searching for deviations within the LHC precision measurements, and with no solid theoretical explanation of why the new physics should be so unnaturally heavy. There is, however, another logical possibility: new physics may be hidden at lower energies although weakly coupled to the SM known particles, so that its signals could be swamped in the SM background.
To process the enormous amount of data provided by the LHC, ATLAS uses an advanced trigger system to tell the detector which events to record and which to ignore. The ATLAS trigger is a two-level system composed of a first-level (Level-1) trigger implemented in custom hardware and a High Level Trigger (HLT) that relies on selections made by algorithms implemented in software. The trigger is designed in such a way that the initial collision rate of 40 MHz is decreased to about 100 kHz after Level-1 and further decreased to 3 kHz at the HLT. This places a harsh limit on the possibility of recording low-energy events, swamped by high-rate background, where signals of new physics could be hidden.
The Phase-I ATLAS Level-1 calorimeter trigger consists of a series of upgrades in order to face the challenges posed by the Run 3 LHC luminosity. The trigger upgrade benefits from new front-end electronics for parts of the calorimeter that provide the trigger system with digital data with a tenfold increase in granularity. This makes possible the implementation of more efficient algorithms to maintain the low trigger thresholds at much harsher LHC collision conditions.
The candidate will work on phenomenological aspects, aimed at characterizing the relevant BSM models that can produce low-mass signatures and at defining a trigger strategy that could maximize the sensitivity of ATLAS searches for this signal in the years to come.
The predictivity of the Standard Model (SM) of particle physics remains unchallenged by experimental results. After the tantalizing discovery of the Higgs boson at LHC, the measurements of properties such as its mass, spin, parity and its couplings with other SM particles have confirmed its SM-like nature. This goes hand in hand with the absence of direct signs of TeV physics beyond the SM from current direct searches.
Indeed, the excellent performance of the LHC in terms of delivered luminosity allowed the ATLAS and CMS experiments to set stringent limits on new particle masses well beyond the EW scale, thus worsening the naturalness problem. If the new physics scale lies well above the present experimentally probed energies, one would be left with the only experimental perspective of searching for deviations within the LHC precision measurements, and with no solid theoretical explanation of why the new physics should be so unnaturally heavy. There is, however, another logical possibility: new physics may be hidden at lower energies although weakly coupled to the SM known particles, so that its signals could be swamped in the SM background.
To process the enormous amount of data provided by the LHC, ATLAS uses an advanced trigger system to tell the detector which events to record and which to ignore. The ATLAS trigger is a two-level system composed of a first-level (Level-1) trigger implemented in custom hardware and a High Level Trigger (HLT) that relies on selections made by algorithms implemented in software. The trigger is designed in such a way that the initial collision rate of 40 MHz is decreased to about 100 kHz after Level-1 and further decreased to 3 kHz at the HLT. This places a harsh limit on the possibility of recording low-energy events, swamped by high-rate background, where signals of new physics could be hidden.
The Phase-I ATLAS Level-1 calorimeter trigger consists of a series of upgrades in order to face the challenges posed by the Run 3 LHC luminosity. The trigger upgrade benefits from new front-end electronics for parts of the calorimeter that provide the trigger system with digital data with a tenfold increase in granularity. This makes possible the implementation of more efficient algorithms to maintain the low trigger thresholds at much harsher LHC collision conditions.
The candidate will work on the analysis of the LHC data recorded in 2022 and 2023 to assess the quality of the data recorded by the upgraded ATLAS calorimeter system. Work on phenomenological aspects, aimed at characterizing the relevant BSM models that can produce low-mass signatures, is also foreseen.
ATLAS is one of the four main detectors at the Large Hadron Collider (LHC) and one of the two general-purpose detectors. It has a cylindrical symmetry that covers almost the entire solid angle and the onion-like layers of sub-detectors allow for unambiguous reconstruction of the final state objects from the high energy collisions.
Electrons are fundamental objects that are reconstructed and identified by the ATLAS detector. They are reconstructed from tracks in the inner detector and matching energy deposits in the electromagnetic calorimeter. Ensuring high electron reconstruction efficiency is crucial for the ATLAS physics programs. Therefore, assessing the performance of the electron reconstruction is very important. Additionally, measuring the electron reconstruction efficiency in both data and Monte Carlo (MC) simulations allows deriving correction factors (or Scale Factors) that are used throughout the ATLAS physics analyses to correct the electron reconstruction efficiency in MC simulations to the efficiency observed in data.
The candidate will work closely with the e/gamma team on performing the measurement of electron reconstruction efficiency with the freshly collected data in 2022 and early 2023. Besides the performance assessment, the selected candidate will work on the optimization of the existing code that performs the measurement, as well as work on improving the systematic uncertainties of the measurement.
Good command of the Python and C++ programming languages would be an advantage, but prior knowledge of them is not mandatory.
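The efficiency and scale-factor definitions underlying the measurement can be sketched as follows (the probe-electron counts are illustrative, not ATLAS data):

```python
import math

def efficiency(n_pass, n_total):
    """Reconstruction efficiency with a simple binomial uncertainty."""
    eff = n_pass / n_total
    return eff, math.sqrt(eff * (1.0 - eff) / n_total)

def scale_factor(eff_data, eff_mc):
    """Data/MC correction factor applied to simulated electrons."""
    return eff_data / eff_mc

# Invented counts of probe electrons passing reconstruction:
eff_d, err_d = efficiency(9520, 10000)   # measured in data
eff_m, err_m = efficiency(9700, 10000)   # measured in MC simulation
print(f"SF = {scale_factor(eff_d, eff_m):.3f}")  # → SF = 0.981
```

In the real measurement the efficiencies are extracted per bin of electron transverse energy and pseudorapidity with background subtraction, and the systematic uncertainties on each bin are what the candidate would work to improve.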
Being forbidden in the Standard Model (SM) of particle physics, lepton flavor violating decays are among the most powerful probes of physics beyond the SM. In view of the recent anomalies seen by LHCb in tests of lepton flavor universality in b-hadron decays, interest in tau lepton flavor violating decays has been greatly reinforced. In particular, several new physics models predict branching fractions just below the current experimental limits.
The Belle II experiment, located at KEK, Japan, started physics data taking in 2019 and is aiming at 50 times more data than its predecessor. Thanks to its clean environment and high cross section, it provides an ideal environment to study tau decays. The CPPM group searches for lepton flavour violating decays such as τ → ℓV0, with V0 a neutral vector meson and ℓ an electron or muon.
The goal of this internship is to develop and use a Graph Neural Network (GNN) to reject background events.
Other architectures and implementations could be studied as well. The candidate will prepare data for training samples, get familiar with the GNN, assess its performance and explore various formulations of the problem and different architectures.
The final objective is to use the GNN in the analyses of the channels studied in the group.
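For illustration, one message-passing step of the kind a GNN applies can be sketched in NumPy. The weights are random and the event graph is a toy, not the Belle II setup:

```python
import numpy as np

def message_passing_layer(x, edges, W_self, W_msg):
    """One graph message-passing step: each node aggregates the mean of its
    neighbours' features, then applies a shared linear update + ReLU.

    x: (n_nodes, n_feat) node features (e.g. reconstructed particles).
    edges: list of directed (src, dst) pairs.
    """
    n, _ = x.shape
    agg = np.zeros_like(x)
    deg = np.zeros(n)
    for src, dst in edges:
        agg[dst] += x[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]                 # mean aggregation
    return np.maximum(x @ W_self + agg @ W_msg, 0.0)   # ReLU update

# Toy event graph: 4 reconstructed objects, fully connected:
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))
edges = [(i, j) for i in range(4) for j in range(4) if i != j]
W_self = 0.5 * rng.normal(size=(3, 3))
W_msg = 0.5 * rng.normal(size=(3, 3))
h = message_passing_layer(x, edges, W_self, W_msg)
print(h.shape)  # (4, 3)
```

Stacking several such layers and pooling the node features into a single score is, schematically, how a GNN produces a per-event signal/background classification.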
This internship can be continued with a PhD thesis.
Applications, including a CV, grade records and a motivation statement, should be sent to giampi@cppm.in2p3.fr and serrano@cppm.in2p3.fr.
References:
https://arxiv.org/abs/1808.10567
https://hflav-eos.web.cern.ch/hflav-eos/tau/spring-2017/lfv-limits-plot.html
https://arxiv.org/abs/1903.11517
https://arxiv.org/pdf/1806.05689.pdf
https://arxiv.org/abs/2208.14924
Dark matter is one of the main puzzles in fundamental physics today. Its contribution to the total mass of the Universe is estimated at about 85%, yet it cannot be explained in the framework of the Standard Model (SM) of particle physics. Several candidates exist, however, in theories beyond the SM, and the WIMP (weakly interacting massive particle) is one of the best motivated, as it would also solve the SM hierarchy problem, directly linked to the stability of the Higgs boson mass.
Experiments searching directly for dark matter thus use the dark matter halo of our galaxy as a potential source of WIMPs. Since 2010, the most sensitive detection technology has been based on the measurement of the scintillation light emitted when a WIMP scatters off an atom of noble liquid argon or xenon. The DarkSide-20k experiment (DS-20k) is currently being installed 1.4 km underground in the Gran Sasso laboratory in Italy. It will use a time projection chamber (TPC) filled with 50 tons of purified argon and read out by 200,000 silicon photomultipliers (SiPMs). This will give it the world-leading discovery potential for WIMPs after a few years of data taking. Current work is dedicated to the construction of the detector, and data taking should start in 2027. The increased liquid argon volume compared with previous experiments will allow DS-20k to reach the best sensitivity of all liquid argon detectors with only one month of data.
The goal of this internship is to prepare for data taking. Once light is collected by the SiPMs, it must be processed and reconstructed to extract, for instance, the location and energy of the collision within the TPC volume. The student will participate in the simulation and improvement of data reconstruction algorithms with innovative machine learning techniques (e.g., neural networks) to optimize the separation between signal and backgrounds. Taking part in the analysis of calibration data taken with small neutron detectors in December 2023 is also foreseen, in order to improve the DS-20k potential. These activities will familiarize the student with instrumental aspects, software and data analysis in a particle physics experiment.
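As a simple illustration of position reconstruction from SiPM light, a light-weighted barycenter gives a first-guess event position that a machine-learning fit would then refine (the geometry and photoelectron counts below are toy values, not the DS-20k algorithm):

```python
import numpy as np

def barycenter_xy(pe_counts, sipm_xy):
    """First-guess (x, y) of a scintillation event as the light-weighted
    mean of the SiPM positions."""
    w = pe_counts / pe_counts.sum()
    return w @ sipm_xy

# Four hypothetical SiPMs at the corners of a 1 m square, with the
# photoelectron counts peaking on the right-hand side:
xy = np.array([[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5], [0.5, 0.5]])
pe = np.array([10.0, 40.0, 10.0, 40.0])
print(barycenter_xy(pe, xy))  # x ≈ 0.3 (toward the brighter side), y ≈ 0
```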
This project could be continued as part of a PhD thesis (already funded by Agence Nationale de la Recherche): https://www.cppm.in2p3.fr/web/en/jobs/phd/index.html#Doctorat-2427-DS-01
More details about the CPPM Dark Matter team: https://www.cppm.in2p3.fr/web/en/research/particle_physics/#Dark%20Matter
The origin of galactic cosmic rays is one of the main open questions in high-energy astrophysics. PeVatrons are objects capable of accelerating particles up to PeV (10^15 eV) energies and are therefore considered the accelerators of galactic cosmic rays. The principal signature of PeVatrons is ultrahigh-energy (exceeding 100 TeV) gamma radiation. The search for PeVatrons has recently been boosted by the discovery of several ultrahigh-energy gamma-ray sources by the Large High Altitude Air Shower Observatory (LHAASO) [1].
Recently, the Supernova Remnant (SNR) G106.3-2.7 has been identified as a highly promising PeVatron candidate [2]. In fact, G106.3-2.7 emits gamma-rays up to 500 TeV from an extended region (~0.2°) well separated from the SNR pulsar (J2229+6114) and in spatial correlation with a local molecular cloud.
The CTA (Cherenkov Telescope Array) is a worldwide project to construct the next-generation ground-based very-high-energy gamma-ray instrument. CTA will use tens of Imaging Air Cherenkov Telescopes (IACT) of three different sizes (mirror diameters of 4 m, 12 m and 23 m) deployed on two sites, one in each hemisphere (La Palma in the Canary Islands and Paranal in Chile). The observatory will detect gamma rays with energies ranging from 20 GeV up to 300 TeV by imaging the Cherenkov light emitted by the charged-particle shower produced by the interaction of the primary gamma ray in the upper atmosphere [3,4].
The CTA observatory completion is foreseen in 2025 but the first Large-Sized Telescope (LST1) is already installed and taking data in La Palma.
While the LST1 telescope lacks the sensitivity to access energies above 100 TeV, it can provide precise angular resolution data for establishing the spectral morphology of this exciting PeVatron candidate in the 1-50 TeV energy region. Observations of G106.3-2.7 started in 2022 and are presently ongoing.
Using the Instrument Response Functions of the LST1 telescope, recently determined within the CPPM group, the student will simulate the G106.3-2.7 source considering different morphological and spectral models based on recent phenomenological studies. The first goal will be to establish the capability of LST1 to determine the morphology of the source. Subsequently, the simulated energy-dependent sky maps will be compared to the first LST1 observations, and this work will help guide the data analysis and interpretation of these early science data.
The student will use the official science tool of CTA (gammapy [5]) written in Python. The candidate should therefore have basic knowledge of Python programming.
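To give a feel for such simulations, the expected number of photons from a power-law source can be estimated in a few lines of Python. The flux normalization, spectral index, effective area and livetime below are made-up numbers, not the LST1 IRFs used in the project:

```python
import numpy as np

def expected_counts(e_lo, e_hi, phi0, index, e0, aeff, t_obs):
    """Expected photon counts in [e_lo, e_hi] (TeV) for a power-law
    spectrum dN/dE = phi0 * (E/e0)**(-index), with a constant effective
    area aeff (m^2) and livetime t_obs (s). Real IRFs are energy
    dependent, so this is only an order-of-magnitude sketch."""
    e = np.logspace(np.log10(e_lo), np.log10(e_hi), 1000)
    flux = phi0 * (e / e0) ** (-index)  # per TeV per m^2 per s
    integral = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(e))  # trapezoid
    return integral * aeff * t_obs

# Made-up hard source: 1e-9 /TeV/m^2/s at 1 TeV, index 2.3, 1e5 m^2
# effective area, 50 h of livetime, integrated over 1-50 TeV:
n = expected_counts(1.0, 50.0, 1e-9, 2.3, 1.0, 1e5, 50 * 3600.0)
print(f"~{n:.0f} expected photons")
```

In gammapy the same ingredients (spectral model, IRFs, livetime) are combined per energy bin and sky pixel to produce the simulated sky maps mentioned above.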
This internship may be continued with a PhD thesis.
References:
[1] Cao, Z., Aharonian, F.A., An, Q. et al. Ultrahigh-energy photons up to 1.4 petaelectronvolts from 12 γ-ray Galactic sources. Nature 594, 33–36 (2021).
[2] Z. Cao et al., Nature 594, 33–36 (2021); M. Amenomori et al., Nature Astronomy 5, 460–464 (2021).
[3] https://www.cta-observatory.org/
[4] Science with the Cherenkov Telescope Array: https://arxiv.org/abs/1709.07997
[5] https://docs.gammapy.org/1.1/
The CTA (Cherenkov Telescope Array) is a worldwide project to construct the next-generation ground-based very-high-energy gamma-ray instrument [1]-[2]. CTA will use tens of Imaging Air Cherenkov Telescopes (IACT) of three different sizes (mirror diameters of 4 m, 12 m and 23 m) deployed on two sites, one in each hemisphere (La Palma in the Canary Islands and Paranal in Chile). The observatory will detect gamma rays with energies ranging from 20 GeV up to 300 TeV by imaging the Cherenkov light emitted by the charged-particle shower produced by the interaction of the primary gamma ray in the upper atmosphere.
The unprecedented capabilities of CTA will address, among other questions, the intriguing origin of the very-high-energy galactic cosmic rays, through the search for galactic sources capable of accelerating cosmic rays up to the PeV scale, called PeVatrons. Recently, the Supernova Remnant (SNR) G106.3-2.7 has been identified as a highly promising PeVatron candidate [4]. In fact, G106.3-2.7 emits gamma-rays up to 500 TeV from an extended region (~0.2°) well separated from the SNR pulsar and in spatial correlation with a molecular cloud.
The CTA observatory's completion is foreseen in 2025, but the first Large-Sized Telescope (LST1) is already installed and taking data in La Palma. LST1 stands very close to the two MAGIC telescopes [3], one of the presently active IACT experiments. This configuration makes it possible to perform joint observations of the same source with the three telescopes (LST1+MAGIC). Joint acquisition not only increases the effective detection area but also improves the energy and angular resolution thanks to the enhanced stereo data. While the LST1+MAGIC telescopes cannot reach enough sensitivity to access energies above 100 TeV, they can provide exclusive and unprecedented data for establishing the spectral morphology of this exciting PeVatron candidate in the 100 GeV-100 TeV energy range. A campaign of joint observations of G106.3-2.7 will start in 2022.
This internship concerns the setup of the reconstruction chain for G106.3-2.7 on the basis of Monte Carlo data. In order to maximise the effective area at very high energy, the G106.3-2.7 observations will be performed at large zenith angle (62°-70°), a challenging detection condition that requires a preliminary verification of the detection performance. The student will first estimate the expected Instrument Response Functions with the standard LST1 reconstruction [5], using the mono-telescope approach. Then, s/he will estimate the performance of the MAGIC+LST1 stereo reconstruction with the joint reconstruction pipeline [6], presently under development. Finally, the student will simulate the signal expected from the source with the two configurations.
The candidate needs a working knowledge of the Python programming language.
Candidates should send their CV and motivation letter as well as grades (Licence, M1 as well as their M2 if available) to cassol@cppm.in2p3.fr
A PhD contract may follow the internship; it will be centred on the analysis of real G106.3-2.7 data acquired in 2022 and the following years.
References:
[1] Science with the Cherenkov Telescope Array: https://arxiv.org/abs/1709.07997;
[2] https://www.cta-observatory.org/
[3] MAGIC Collaboration, Aleksić, J. et al., Astropart. Phys. 72 (2016) 76–94.
[4] Z. Cao et al., Nature 594, 33–36 (2021); M. Amenomori et al., Nature Astronomy 5, 460–464 (2021).
[5] https://github.com/cta-observatory/cta-lstchain
[6] https://github.com/cta-observatory/magic-cta-pipe
KM3NeT/ORCA (Oscillation Research with Cosmics in the Abyss) is a deep sea
neutrino telescope currently under construction at a depth of 2500m in the
Mediterranean Sea off the coast of Toulon. ORCA is optimised for the detection of
low energy (3-100 GeV) atmospheric neutrinos and will allow precision studies
of neutrino oscillation properties. ORCA is part of the multi-site KM3NeT research infrastructure, which also incorporates a second telescope array (in Sicily) optimised for the detection of high-energy cosmic neutrinos.
The first ORCA detection strings have been operating for more than a year and are providing high-quality data. During this internship the student will apply machine learning techniques to the data analysis, with the aim of improving the angular and energy resolutions of the current event reconstruction algorithms. The candidate is expected to follow this internship with a PhD on measuring the neutrino oscillation parameters.
Links:
http://www.cppm.in2p3.fr/rubrique.php3?id_rubrique=259
Neutrino astronomy is a long-standing quest in astroparticle physics. IceCube and ANTARES have found the first evidence of a few neutrino sources, mainly related to blazars (active galactic nuclei with their jets pointing toward the Earth) and tidal disruption events. Most of those explosive events can release enormous amounts of energy both in electromagnetic radiation and in non-electromagnetic forms such as neutrinos and gravitational waves. This is the basis of multi-messenger astronomy.
KM3NeT, the second-generation neutrino detector in the Mediterranean Sea, is under construction. It is taking data with a sensitivity much higher than that of ANTARES over the whole energy range, from GeV to PeV, thanks to the complementarity of its two detectors, ORCA and ARCA. Already with the 30-40 detection units in operation, KM3NeT has significantly better performance, both in terms of effective area and of angular resolution.
At CPPM, we mainly work on the implementation of multi-messenger analyses with high-energy neutrinos detected by the ANTARES and KM3NeT neutrino telescopes. In this context, we are developing an analysis framework able to receive selected external triggers and to perform a time and spatial correlation analysis with high-energy neutrinos in coincidence with them. These analyses can be performed in real time or offline, including the most refined knowledge of the detector. In recent years, IceCube has provided alerts from selected high-energy neutrinos, and for some of them a bright blazar was located in the error box of the neutrino and found in an active state in concomitant multi-wavelength observations.
During this internship, the student will perform an optimised analysis of the KM3NeT data for such interesting associations. It consists in developing a neutrino selection based on the outputs of the event reconstructions and the event topology classifiers; this selection can be made with machine learning tools. After the event selection, the student will implement the correlation analysis.
This internship can be continued with a PhD in our group on the multi-messenger analyses with KM3NeT.
The analyses will be performed using C++ or Python.
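As an illustration of the correlation step, a toy time-and-space coincidence test between a neutrino candidate and an external alert might look like this (the 500 s window and 1° radius are placeholder cuts, not the analysis values):

```python
import numpy as np

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions (deg)."""
    r1, d1, r2, d2 = np.radians([ra1, dec1, ra2, dec2])
    c = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def coincident(nu, alert, dt_max=500.0, dang_max=1.0):
    """True if the neutrino candidate falls inside the alert's time window
    and error circle (toy cut values)."""
    return (abs(nu["t"] - alert["t"]) < dt_max
            and angular_sep(nu["ra"], nu["dec"],
                            alert["ra"], alert["dec"]) < dang_max)

# Hypothetical neutrino candidate and external trigger:
nu = {"t": 1000.2, "ra": 77.4, "dec": 5.7}
alert = {"t": 1000.0, "ra": 77.35, "dec": 5.69}
print(coincident(nu, alert))  # → True
```

The real analysis replaces these hard cuts with a likelihood built from the time difference, angular separation, error estimates and energy of the events.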
KM3NeT/ORCA (Oscillation Research with Cosmics in the Abyss) is a deep sea neutrino telescope currently under construction at a depth of 2500m in the Mediterranean Sea off the coast of Toulon. ORCA is optimised for the detection of atmospheric neutrinos in the energy range 3-100 GeV and will allow precision studies of neutrino properties. Currently the detector takes data with 11 detection strings hosting more than 6000 photomultiplier tubes.
During this internship at the Centre de Physique des Particules de Marseille, the student will analyse data taken with the ORCA detector in the period 2020 to 2022. The goal is to obtain detector response functions for neutrinos in the relevant energy range and to derive sensitivities for the observation of sterile neutrinos. Software tools developed at CPPM will be used.
Links:
http://www.cppm.in2p3.fr/rubrique.php3?id_rubrique=259
The Neutrino team at CPPM is strongly involved in the KM3NeT/ORCA neutrino telescope, under construction in the abyss (-2500 m) of the Mediterranean Sea, 40 km offshore from Toulon. The first detection units that have been deployed are successfully collecting data. The detector is now large enough to access as-yet unexplored physics territory. A very exciting topic is the search for tau neutrinos appearing in the neutrino flux created by collisions of cosmic rays in the atmosphere. The appearance probability is poorly known, and KM3NeT/ORCA has a unique potential to measure it. Such measurements could lead to a major discovery regarding the existence of sterile neutrinos.
One of the keystones of these studies is tagging the neutrino flavour (electron, muon, or tau); hence, in this project, the student will develop machine learning algorithms to perform this identification. The expected skills are a command of the basics of neutrino oscillations and the ability to program in Python, C++ and ROOT.
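As a toy stand-in for such a flavour-identification algorithm, a linear softmax classifier over a few event-topology features can be sketched as follows (random weights and hypothetical features, purely illustrative):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_flavour(features, W, b, labels=("electron", "muon", "tau")):
    """Toy linear classifier over event-topology features, standing in
    for the ML flavour tagger to be developed in the project."""
    probs = softmax(W @ features + b)
    return labels[int(np.argmax(probs))], probs

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 4))   # untrained, random weights
b = np.zeros(3)
x = rng.normal(size=4)        # e.g. track length, shower shape, hit counts...
label, probs = classify_flavour(x, W, b)
print(label, np.round(probs, 2))
```

A real tagger would be trained on simulated events and would likely use a deep network over the raw hit pattern rather than a handful of summary features.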
Neutrino astronomy is a long-standing dream in astroparticle physics. IceCube and ANTARES have found the first evidence of a few neutrino sources, mainly related to blazars (active galactic nuclei with their jets pointing toward the Earth) and tidal disruption events. Most of those explosive events can release enormous amounts of energy both in electromagnetic radiation and in non-electromagnetic forms such as neutrinos and gravitational waves. This is the basis of multi-messenger astronomy. KM3NeT, the second-generation neutrino detector in the Mediterranean Sea, will have significantly better performance, both in terms of effective area and of angular resolution.
At CPPM, we mainly work on the development of multi-messenger analyses with high-energy neutrinos detected by the ANTARES and KM3NeT neutrino telescopes. In this context, we are developing a real-time analysis framework able to send neutrino alerts and to receive and process cross-match analyses of high-energy neutrinos in coincidence with selected potential external triggers.
During this internship, the student will implement a neutrino selection module that takes as input the reconstructed and classified neutrino streams. To reach a sustainable false alert rate (1-2 per month), it will be necessary to filter on the topology of the events, their multiplicity, their energy and the estimated reconstruction error. The student will implement such a module based on machine learning tools.
The analyses will be performed using C++ or Python.
Description
Deadline to apply: December 30th
Scientific Context
According to modern physics, matter should not have emerged from the Big Bang, and its origin remains one of the most profound riddles in fundamental physics. At the heart of this mystery lies CP symmetry, i.e. the fact that the laws of physics are the same for matter and anti-matter. During the Big Bang, this symmetry should have maintained particles and anti-particles in equal quantities while they gradually annihilated each other, leaving in the end nothing but pure energy. The existence of matter thus requires CP symmetry to be violated, which is one of the so-called Sakharov conditions (Sakharov, 1967).
The work awarded the 2015 Nobel Prize implies that neutrino physics can break this symmetry via a quantum phenomenon called neutrino oscillations. Neutrinos can be produced in three types, or flavors: the electron neutrino (νe), the muon neutrino (νμ) and the tau neutrino (ντ). Experimental evidence showed that the flavor of neutrinos oscillates as they propagate. In theory, this oscillation can differ for neutrinos and anti-neutrinos, which would break CP symmetry. Discovering such an effect would be a major breakthrough in fundamental physics. Intense experimental efforts (Acciarri et al., 2015; Abe et al., 2018) are thus ongoing worldwide to study neutrino oscillations. However, the experimental techniques used so far are reaching their limits (Branca et al., 2021). New techniques are thus urgently needed. The goal of this Master 2 project is to design such a new technique: neutrino tagging (Perrin-Terrin, 2022).
Neutrino Tagging
At accelerator-based experiments, neutrinos are produced by colliding protons on a target. These collisions produce secondary particles, in particular π and K mesons, which decay as π → μνμ and K → μνμ and so produce a νμ and anti-νμ beam. The optimal propagation distance to observe the neutrino oscillations depends on the neutrino energy and ranges between 100 km and 1000 km. A neutrino detector consisting of an instrumented target is installed at this distance to measure the neutrino flavor and energy. Conventionally, the neutrino characteristics are measured from their interaction in a densely instrumented detector, as shown in Fig 1-(left). The complexity of the interaction mechanisms strongly limits the precision of the energy measurements.

The tagging technique (Perrin-Terrin, 2022) proposes to determine the neutrino characteristics from the production mechanism, as shown in Fig 1-(right). This mechanism, the meson decay, is an extremely simple process. Hence, once the π and μ characteristics (time, momentum, charge: t, p, ±) are measured, a simple kinematical relation allows the neutrino characteristics to be derived precisely. For example, while the precision of the interaction-based neutrino energy measurement plateaus at about 15%, that of the tagging can easily reach 1%. In these conditions, the only task left to the neutrino detector is to identify the flavor of the neutrino after propagation. These relaxed requirements make it possible to use seawater neutrino detectors such as KM3NeT/ORCA (Adrian-Martinez et al., 2016), under construction at a depth of 2450 m offshore Toulon. These detectors are extremely large (several Mton), which increases the probability for neutrinos to interact in them. The key element of the technique is the tagger, i.e. the detector from which the neutrino properties are estimated. The tagger will be composed of several planes of cutting-edge, high-time-precision silicon pixel detectors (Lai, 2018) able to sustain the extremely high particle rate in the beam line: 10^12 per second! A proof of concept of this technique is being performed using the NA62 experiment (Cortina Gil et al., 2017) at CERN as a miniature neutrino experiment (Martino, 2022).
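The kinematical relation mentioned above is simply energy-momentum conservation in the two-body decay; a minimal Python sketch (with invented pion and muon momenta, not NA62 or tagger data):

```python
import numpy as np

M_PI, M_MU = 0.13957, 0.10566  # charged pion and muon masses, GeV

def tagged_neutrino(p_pi, p_mu):
    """Neutrino four-momentum (E, px, py, pz) from the measured pion and
    muon three-momenta (GeV) in a tagged pi -> mu nu decay: plain
    energy-momentum conservation."""
    p_pi, p_mu = np.asarray(p_pi), np.asarray(p_mu)
    e_pi = np.sqrt(p_pi @ p_pi + M_PI**2)
    e_mu = np.sqrt(p_mu @ p_mu + M_MU**2)
    return np.concatenate([[e_pi - e_mu], p_pi - p_mu])

# 60 GeV pion along z, muon slightly deflected (illustrative values):
nu = tagged_neutrino([0.0, 0.0, 60.0], [0.01, 0.0, 35.0])
print(nu)  # neutrino energy ~25 GeV, almost collinear with the beam
```

The precision of this derived energy is limited only by how well the tagger measures the pion and muon tracks, which is why it can reach the percent level quoted above.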
Objectives of the Project
The project aims to co-design the detector layout (time resolution, number of planes, their locations, etc.) and the algorithms to estimate the neutrino characteristics. Achieving this objective will require the student to:
- design a statistical model describing the physical setup,
- develop a simulation of the detector,
- calculate the optimal performance bounds,
- optimize the detector setup based on the calculated bounds,
- test/apply the model and bounds to NA62.
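As a toy illustration of the "optimal performance bounds" step, assuming the simplest possible statistical model (each tagger plane measuring a crossing time with independent Gaussian noise of resolution σ_t), the Cramér-Rao bound on the reference time is σ_t/√N, and a Monte Carlo check shows the sample-mean estimator attains it:

```python
# Toy Cramer-Rao bound illustration (assumed model, not the project's actual
# statistical model): N tagger planes each measure a crossing time with
# Gaussian resolution sigma_t; the CRB on the estimated time is sigma_t/sqrt(N).
import math
import random
import statistics

def crb_sigma(sigma_t, n_planes):
    """Cramer-Rao lower bound for n_planes i.i.d. Gaussian measurements."""
    return sigma_t / math.sqrt(n_planes)

def simulate(sigma_t, n_planes, n_trials=20000, t0=0.0, seed=1):
    """Empirical spread of the maximum-likelihood estimator (sample mean)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        hits = [t0 + rng.gauss(0.0, sigma_t) for _ in range(n_planes)]
        estimates.append(statistics.fmean(hits))
    return statistics.stdev(estimates)

sigma_t, n_planes = 50e-12, 4          # assumed: 50 ps per plane, 4 planes
bound = crb_sigma(sigma_t, n_planes)   # 25 ps
empirical = simulate(sigma_t, n_planes)
print(f"CRB: {bound * 1e12:.1f} ps, empirical: {empirical * 1e12:.1f} ps")
```

The real project would replace this toy model with the full detector description, but the workflow (model, bound, simulation, comparison) is the same.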
Working Environment
The student will be based at IM2NP, located at the La Garde campus in Toulon, or/and at CPPM located in the Luminy campus in Marseille, at the entrance of the Parc National des Calanques. Both places are located near remarkable natural sites and offer pleasant working and living conditions.
The student will join the Signal And Tracking (STr) team of IM2NP and the Neutrino team of CPPM. The STr team is composed of ~10 people specialized in applied statistics and signal processing for different domains (astrophysics, optics, RADAR, SONAR, LiFi, etc.; Roueff, Arnaubec, Dubois-Fernandez, Refregier, 2011; Roueff et al., 2020; Roueff, Roux, Réfrégier, 2009). The team at CPPM is composed of ~30 people (researchers, postdocs, PhD students, engineers and technicians) with a large panel of skills, working on the construction and exploitation of the KM3NeT detectors. A PhD student is also working on the proof-of-principle of the neutrino tagging technique at NA62.
The student will work in a dynamic international team. The project will be developed in close collaboration with CERN. Indeed, the tagger studied in this project is meant to be installed in the neutrino beam line that CERN has started to study for the tagging. Regular video-conference meetings with CERN collaborators will be held during the project, and short trips to CERN for in-person meetings can also be envisaged.
The last step of the project (application and test at NA62) will be done in collaboration with several members of the NA62 experiment at the University of Birmingham (UK), the Ecole Polytechnique Fédérale de Lausanne (Switzerland) and the Université Catholique de Louvain (Belgium), who are all actively contributing to the proof-of-principle of the neutrino tagging technique.
Student Profile
We are seeking a highly motivated student who could consider continuing this work as a PhD thesis. The student should ideally have:
- basic knowledge of experimental particle physics,
- skills in applied statistics,
- previous experience in machine learning techniques using C/C++, MATLAB or Python,
- oral and written proficiency in English and French.
Application Procedure
Students interested in applying for the internship must provide:
- CV
- Motivation letter
- Grades from M1
- Available grades from M2
- Desired internship duration
- Desired internship starting date (optional)
- Reference letter or reference contact (optional)
This information should be sent to antoine.roueff@univ-tln.fr and mathieu.perrin-terrin@cern.ch before December 30th. Interviews will be conducted at the beginning of January.
PhD Perspectives
Beyond this Master Project, the student could then enroll in a PhD thesis in the context of the KM3NeT experiment, the NA62 experiment and the neutrino tagging studies at CERN.
Bibliography
Abe, K., et al. (2018, May). Hyper-Kamiokande Design Report.
Acciarri, R., et al. (2015). Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE).
Adrian-Martinez, S., et al. (2016). Letter of intent for KM3NeT 2.0. J. Phys., G43, 084001. doi:10.1088/0954-3899/43/8/084001
Branca, A., Brunetti, G., Longhin, A., Martini, M., Pupilli, F., Terranova, F. (2021). A New Generation of Neutrino Cross Section Experiments: Challenges and Opportunities. Symmetry, 13, 1625. doi:10.3390/sym13091625
Cortina Gil, E., et al. (2017). The Beam and detector of the NA62 experiment at CERN. JINST, 12, P05025. doi:10.1088/1748-0221/12/05/P05025
Lai, A. (2018). A System Approach towards Future Trackers at High Luminosity Colliders: the TIMESPOT Project. (pp. 13). Sydney: IEEE. doi:10.1109/NSSMIC.2018.8824310
Martino, B. D. (2022, July). Tagged Neutrino Beams. Zenodo. doi:10.5281/zenodo.6785370
Perrin-Terrin, M. (2022). Neutrino tagging: a new tool for accelerator based neutrino experiments. Eur. Phys. J. C, 82, 465. doi:10.1140/epjc/s10052-022-10397-8
Roueff, A., Arnaubec, A., Dubois-Fernandez, P. C., Refregier, P. (2011). Cramér-Rao Lower Bound Analysis of Vegetation Height Estimation With Random Volume Over Ground Model and Polarimetric SAR Interferometry. IEEE Geoscience and Remote Sensing Letters, 8(6), 1115-1119. doi:10.1109/LGRS.2011.2157891
Roueff, A., Gerin, M., Gratier, P., Levrier, F., Pety, J., Gaudel, M., ... Sievers, A. (2020, May). C18O, 13CO, and 12CO abundances and excitation temperatures in the Orion B molecular cloud: An analysis of the precision achievable when modeling spectral lines within the Local Thermodynamic Equilibrium approximation. A&A, 645, A26 (2021). doi:10.1051/0004-6361/202037776
Roueff, A., Roux, P., Réfrégier, P. (2009, April). Wave separation in ambient seismic noise using intrinsic coherence and polarization filtering. Signal Processing, 89, 410-421. doi:10.1016/j.sigpro.2008.09.008
Sakharov, A. D. (1967). Violation of CP Invariance, C asymmetry, and baryon asymmetry of the universe. Pisma Zh. Eksp. Teor. Fiz., 5, 32-35. doi:10.1070/PU1991v034n05ABEH002497
In the late 90s, measurements of the distances of supernovae and the redshifts of their host galaxies revealed that the expansion of the Universe is accelerating. More than 20 years after this discovery, the nature of the dark energy at the origin of this phenomenon remains unknown.
The ΛCDM concordance model describes a homogeneous, isotropic Universe on large scales, subject to the laws of general relativity (GR). In this model, most of the Universe's energy content comes from cold dark matter and dark energy, introduced as a cosmological constant. The latter behaves like a perfect fluid with negative pressure p and equation of state p = -ρ, where ρ is the energy density.
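In units where c = 1, the second Friedmann equation makes explicit why such a negative-pressure fluid accelerates the expansion:

```latex
% Second Friedmann (acceleration) equation; a cosmological constant with
% p = -rho makes the right-hand side positive, i.e. accelerated expansion.
\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p)
\qquad\stackrel{p=-\rho}{\Longrightarrow}\qquad
\frac{\ddot{a}}{a} = \frac{8\pi G}{3}\,\rho > 0 .
\]
```

Any fluid with p < -ρ/3 would accelerate the expansion; the cosmological constant is the limiting case with a constant equation of state.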
Some alternative models (see [1] for a review) introduce scalar fields (quintessence) whose evolution is responsible for the accelerated expansion. These scalar fields can vary in time and space. They can therefore have a time-dependent equation of state and generate anisotropic expansion.
Other models propose to modify the law of gravitation on large scales, mimicking the role of dark energy.
Supernovae remain one of the most accurate probes of the Universe's expansion and homogeneity. In addition, part of the redshift of galaxies is due to a Doppler effect caused by their particular velocities. We can then use supernovae to reconstruct the velocity field on large scales, and measure the growth rate of cosmic structures. This will enable us to test the law of gravitation.
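The velocity reconstruction mentioned above can be sketched at low redshift: the supernova distance gives the redshift expected from the Hubble flow, and the residual with the observed host-galaxy redshift is attributed to the line-of-sight peculiar velocity. The H0 value and the example numbers below are illustrative assumptions:

```python
# Hedged low-redshift sketch: peculiar velocity from a supernova distance and
# the observed host redshift. H0 and the example inputs are assumed values.
C_KM_S = 299792.458  # speed of light [km/s]
H0 = 70.0            # Hubble constant [km/s/Mpc], assumed

def peculiar_velocity(z_obs, distance_mpc):
    """Line-of-sight peculiar velocity [km/s], valid at low redshift."""
    z_cos = H0 * distance_mpc / C_KM_S        # Hubble-flow redshift
    return C_KM_S * (z_obs - z_cos) / (1.0 + z_cos)

# Hypothetical supernova at 100 Mpc whose host is observed at z = 0.0245
v = peculiar_velocity(0.0245, 100.0)
print(f"peculiar velocity: {v:+.0f} km/s")
```

Averaging such residuals over many supernovae reconstructs the large-scale velocity field, whose amplitude traces the growth rate of structures.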
An anisotropy of expansion on large scales, a modification of GR, or an evolution of the equation of state for Dark Energy, would all be revolutionary observations that would challenge our current model.
Until now, supernova surveys have gathered data from multiple telescopes, complicating their statistical analysis. Surveys by the Zwicky Transient Facility (ZTF: https://www.ztf.caltech.edu/) and the Vera Rubin/LSST Observatory (https://www.lsst.org/) will change all that. They cover the entire sky and accurately measure the distance to tens (hundreds) of thousands of nearby (distant) supernovae.
The CPPM has been working on ZTF data since 2021 and has been involved in the construction and implementation of LSST for years, preparing for the arrival of data in 2025.
Within the group, we are working on the photometric calibration of the ZTF survey, essential for the measurement precision we need (see ubercalibration [2,3]). A former PhD student developed a pipeline to simulate ZTF and measure the growth rate of structures [4], and a current PhD student is adapting this work to LSST. In addition, a post-doc has just joined the group to work on ZTF, and a Chair of Excellence (DARKUNI, see Julian Bautista's internship/thesis) is extending this work by combining these data with spectroscopic data from DESI.
The aim of the internship is to develop this analysis pipeline for measuring the growth rate of structures.
Other aspects, such as the study of the homogeneity of the expansion within this framework, could also be explored.
This is an observational cosmology internship, for a candidate interested in cosmology and data analysis.
[1] https://arxiv.org/abs/1601.06133
[2] https://arxiv.org/abs/astro-ph/0703454v2
[3] https://arxiv.org/abs/1201.2208v2
[4] https://arxiv.org/abs/2303.01198 https://snsim.readthedocs.io/
Twenty years after the discovery of the current acceleration of the expansion of the universe by supernova measurements, the supernova probe remains the most accurate way to measure the parameters of this recent period in the history of our universe dominated by the so-called dark energy.
The precision measurements that can be performed by the supernova probe will be a crucial element that, in combination with other probes (LSS, weak lensing, CMB, etc.), will put strong constraints on the nature of dark energy. This will be made possible by the exceptional supernova data set to be provided by LSST, with a combination of huge statistics and extreme calibration accuracy.
The Rubin Observatory with the Legacy Survey of Space and Time (Rubin/LSST) project will be commissioned in 2022 and will run at full speed by the end of 2023. It is an 8.4-metre telescope with a 3.2-billion-pixel camera, the most powerful ever built.
This telescope will take a picture of half the sky every three nights for ten years. This survey will make it possible to measure billions of galaxies with great accuracy and to track the variation over time of all transient objects. With many other astrophysical studies, it will be a very powerful machine for determining cosmological parameters using many different probes and, in particular, it will impose strong constraints on the nature of dark energy. The LSST project aims to discover up to half a million supernovae. This two to three orders of magnitude improvement in statistics over the current data set will allow accurate testing of dark energy parameters and will also impose new constraints on the universe's isotropy.
In this Master 2 internship we propose to prepare the first analysis of LSST supernova data by performing an analysis using LSST software and our deep learning method for identifying supernovae on existing HSC/Subaru data. Indeed, the HSC data have characteristics that are very close to what we expect with Rubin/LSST. The CPPM LSST group is already engaged in precision photometry work for LSST with direct involvement in algorithm validation within DESC/LSST [1][2][3] and has proposed a new deep learning method to improve the photometric identification of supernovae [4] and photometric redshifts [5].
[1] https://www.lsst.org/content/lsst-science-drivers-reference-design-and-anticipated-data-products
[2] https://arxiv.org/abs/1211.0310
[3] https://www.lsst.org/about/dm
[4] https://arxiv.org/abs/1901.01298
[5] https://arxiv.org/abs/1806.06607
[6] https://arxiv.org/abs/1401.4064
Twenty years after the discovery of the accelerating expansion of the universe through supernova measurements, the supernova probe remains one of the most accurate means of measuring the cosmological parameters of this recent period in the history of our universe, dominated by the so-called dark energy.
The Rubin Observatory with the Legacy Survey of Space and Time (Rubin/LSST) will be commissioned in 2024 and will be fully operational by mid-2025. It is an 8.4-m telescope equipped with a 3.2-billion-pixel camera, the most powerful ever built.
This telescope will take a picture of half the sky every three nights for ten years. This survey will make it possible to measure billions of galaxies with great precision, and to track the variation over time of all transient objects. Together with many other astrophysical studies, it will be a very powerful machine for determining cosmological parameters using many different probes and, in particular, will impose strong constraints on the nature of dark energy. The LSST project aims to discover up to half a million supernovae. This improvement of two to three statistical orders of magnitude over the current data set will enable precise testing of the parameters of dark energy, test general relativity and also impose new constraints on the isotropy of the universe.
In this Master 2 internship, we propose to prepare the analysis of the first LSST supernova data by performing an analysis using LSST software and our deep learning method for supernova identification on existing HSC/Subaru data. Indeed, the HSC data have characteristics very close to those we expect from Rubin/LSST. The LSST group at CPPM is already involved in precision photometry for LSST, with direct involvement in algorithm validation within DESC/LSST [1][2][3], and has proposed a new deep learning method to improve photometric supernova identification [4] and photometric redshifts [5].
[1] https://www.lsst.org/content/lsst-science-drivers-reference-design-and-anticipated-data-products
[2] https://arxiv.org/abs/1211.0310
[3] https://www.lsst.org/about/dm
[4] https://arxiv.org/abs/1901.01298
[5] https://arxiv.org/abs/1806.06607
[6] https://arxiv.org/abs/1401.4064
Future galaxy surveys such as the Euclid satellite (Laureijs et al. 2011) will constrain the cosmological model using measurements of the distortion of the shapes of background galaxies due to gravitational lensing. Indeed, while travelling to us, photons follow the geodesics of the underlying gravitational field, which results in an apparent alignment of galaxies around gravitational potential wells. Measuring this alignment has allowed cosmologists to put constraints on the parameters that describe our universe. However, when measuring galaxy alignment due to gravitational lensing, it is important to take into account the intrinsic alignment of galaxies, which arises because galaxies evolve locally in alignment with the potential well surrounding them. This correlation between galaxy shapes and their environment, which represents a non-negligible systematic error on the overall weak lensing signal, has been measured around over-densities and recently also around under-densities (d'Assignies et al. 2022). Moreover, recent works (Taruya et al. 2020) have shown that such a signal could also provide additional information to constrain cosmological parameters. In the context of the Euclid satellite, which will measure the shapes of millions of galaxies in the coming years, we propose an internship aimed at evaluating the impact of the intrinsic alignment of galaxies on the void lensing signal using realistic Euclid simulations.
The context: More than twenty years after the discovery of the accelerated nature of the Universe's expansion, there is still no definitive explanation for its physical origin. Several types of dark energy, and even alternatives/extensions to general relativity, have been proposed in the literature to explain the acceleration of the expansion. By accurately measuring both the expansion rate of the Universe and the growth rate of structures as a function of cosmic time, we can learn more about this cosmological mystery. Particularly at low redshift, when the expansion is accelerated and dark energy dominates, we are interested in obtaining the best constraints on the growth rate of structures. These measurements can be achieved by combining galaxy positions and their velocities. The statistical properties of the density and velocity fields are tightly connected to the underlying cosmological model.
Experiments: Measurements of the expansion and growth rates of the Universe are the main scientific goal of current and future experiments such as the Dark Energy Spectroscopic Instrument (DESI), the Zwicky Transient Facility (ZTF), Euclid and the Vera Rubin Observatory Legacy Survey of Space and Time (Rubin-LSST).
DESI is currently measuring 40 million galaxy positions (with their redshifts), and its low-redshift sample will be the most complete to date.
The ZTF survey will discover more than 6000 type-Ia supernovae, from which we can derive galaxy velocities. Rubin-LSST will increase this number to hundreds of thousands.
Goal of internship: The selected candidate will work towards the joint analysis of DESI and ZTF datasets, which contain millions of galaxies and thousands of type-Ia supernovae. The candidate will become familiar with the physics and statistics of galaxy clustering, will code their own analysis pipeline, test it on state-of-the-art simulations, and hopefully apply it to real data. This internship is meant to be a preparation for a PhD thesis on the same topic, if a scholarship is available.
Profile required: The candidate should have a strong interest in cosmology, statistics, data analysis and programming (we mostly use Python). English proficiency and teamwork skills are also required.
The Euclid mission (http://www.euclid-ec.org) is a major ESA project that launched, in July 2023, a space telescope dedicated to understanding the Universe. Through a survey of the entire sky, it will provide a 3D mapping of galaxies with unprecedented precision. These measurements of the large structures of the distant Universe will test the cosmological model, particularly questioning the nature of dark energy. The mapping will be achieved using the NISP spectrophotometer and its 16 infrared detectors, which were calibrated on the ground by CPPM, a fundamental step in validating the instrument's performance.
Main Activity
NISP's infrared detectors were specifically developed for the Euclid mission. At the cutting edge of technology, each one consists of a matrix of 2048 x 2048 pixels. Their fine calibration was carried out at CPPM, resulting in the recording of 500 terabytes of data to be analyzed. These data clearly show the presence of persistence, which contaminates the data for several hours of acquisition. In order to gain a better understanding of this phenomenon, the intern will work on characterizing the persistence and understanding the influence of environmental parameters on it.
Thus, the intern will need to apply classical or more sophisticated analysis methods, following these steps:
- Implement the methods in Python.
- Extract parameters such as time constants and amplitudes from the existing calibration data.
- Analyze correlations between persistence and environmental data.
- Conduct a similar study on a detector of a different technology and compare the results.
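As a minimal sketch of the parameter-extraction step, assuming a single-exponential persistence decay S(t) = A·exp(-t/τ) (the real calibration data may require sums of exponentials and robust fitting), a log-linear least-squares fit recovers A and τ; the data below are synthetic:

```python
# Hedged sketch: extract the amplitude A and time constant tau of an assumed
# single-exponential persistence decay via linear regression on log(S).
import math

def fit_exponential(times, signal):
    """Fit S(t) = A*exp(-t/tau); returns (A, tau)."""
    xs, ys = times, [math.log(s) for s in signal]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope

# Synthetic persistence signal: A = 120 ADU, tau = 300 s, sampled every 60 s
times = [60.0 * i for i in range(1, 20)]
signal = [120.0 * math.exp(-t / 300.0) for t in times]
A, tau = fit_exponential(times, signal)
print(f"A = {A:.1f} ADU, tau = {tau:.0f} s")
```

On noisy data a weighted or nonlinear least-squares fit (e.g. scipy.optimize.curve_fit) would be the natural next step.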
Required Knowledge
- Strong foundation in Python programming
- Good knowledge of signal processing
- Good knowledge of semiconductor physics
The 6-month internship will be covered by a standard internship agreement and paid. It may lead to Ph.D. work with CNES funding (application in progress).
Contact: CV + cover letter
Aurélia Secroun, Research Engineer
Tel : 04 91 82 72 15 mail : secroun@cppm.in2p3.fr
The Λ-cold dark matter (ΛCDM) model is currently the standard cosmological model used to describe the geometric and dynamic properties of our Universe. It is based on Einstein's theory of General Relativity (GR), necessary for modeling gravity, the only relevant interaction operating on the largest scales. GR has successfully passed numerous tests in our local Universe and has never been falsified. However, several alternative gravity theories remain compatible with observational data sets from cosmological probes and may offer a physical explanation for the accelerated expansion of the Universe without invoking any new dark energy components. One way to test GR on megaparsec scales is through the study of gravitational redshifts in galaxy clusters. This refers to the change in the mean observed redshift of cluster member galaxies caused by the gravitational potential encountered by light along its path. However, given the weak nature of this effect, it is necessary to stack a large number of galaxy clusters to enhance the signal-to-noise ratio. Specifically, our plan involves measuring the gravitational redshift effect from stacked cluster profiles, utilizing the peculiar velocity distributions of the cluster member galaxies as probes.
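For orientation, an order-of-magnitude estimate of the effect can be sketched with a Newtonian potential and illustrative cluster parameters (the mass and radius below are assumptions, not measured values):

```python
# Order-of-magnitude sketch: gravitational redshift of light escaping a
# cluster potential well, expressed as an equivalent velocity c*dz ~ Phi/c.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]
MPC = 3.086e22     # megaparsec [m]

def grav_redshift_velocity(mass_msun, radius_mpc):
    """Velocity equivalent c*dz of the potential GM/R, in km/s."""
    phi = G * mass_msun * M_SUN / (radius_mpc * MPC)  # |Phi| [m^2/s^2]
    return phi / C / 1e3                              # c*dz [km/s]

# Illustrative massive cluster: M ~ 1e15 Msun within ~1 Mpc
v_dz = grav_redshift_velocity(1e15, 1.0)
print(f"c*dz ~ {v_dz:.0f} km/s")
```

The result, of order 10 km/s, is small compared with the ~1000 km/s peculiar-velocity dispersion of member galaxies, which is why stacking many clusters is required.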
Gravitational redshift measurements have already been conducted by Wojtak et al. 2011, Jimeno et al. 2015 and Rosselli et al. 2023. However, the uncertainties in these measurements are still too large to statistically distinguish among different gravity theories. Additionally, these studies relied on several key assumptions that should be systematically validated against simulations.
This project aims to construct accurate mock catalogues to rigorously test state-of-the-art numerical algorithms for measuring gravitational redshifts in galaxy clusters. The goal is to investigate all possible systematic uncertainties and develop effective mitigation strategies. Furthermore, these simulations will be instrumental in providing forecasts for upcoming missions, such as the European Space Agency (ESA) mission Euclid, the Dark Energy Spectroscopic Instrument (DESI) and Rubin/LSST.
Context:
The Euclid mission (http://www.euclid-ec.org) is a major ESA project that will launch in 2023 a space telescope dedicated to understanding the Universe and will carry out a survey of the entire sky. With unprecedented precision, these measurements of the large structures of the distant Universe will make it possible to test the cosmological model and, in particular, to question the nature of dark energy. The survey will be obtained with the NISP spectrophotometer and the 16 infrared detectors of its focal plane, whose ground calibration was carried out by CPPM, a fundamental step in validating the instrument's performance.
Main activity:
The NISP infrared detectors were developed specifically for the Euclid mission. At the cutting edge of technology, each consists of a matrix of 2048 x 2048 pixels. Their fine calibration was carried out at CPPM and resulted in the recording of 500 TB of data to be analyzed. These data clearly show the presence of persistence, which contaminates the data for several hours of acquisition. To gain a better understanding of this phenomenon, the trainee engineer will work on characterizing the persistence and understanding the influence of environmental parameters on it.
To do so, the trainee engineer will apply classical analysis methods (fits with more or less simple functions), following these steps:
- Implement the chosen methods in Python
- Extract quantities such as time constants and amplitudes from the existing calibration data
- Analyze correlations between persistence and environmental data
Required knowledge:
- Strong foundation in Python programming
- Good knowledge of signal processing
- Knowledge of semiconductor physics
Although the universe is well described by the concordance model ΛCDM, the nature of its components, dark matter and dark energy, remains a major puzzle of modern cosmology. While historically most attention has been paid to the overdense regions, the underdense regions account for about 80 per cent of the total volume of the observable Universe and strongly influence the growth of large-scale structure. As voids are nearly devoid of matter, they have proved to be very promising objects for exploring the imprint of possible modifications of General Relativity (GR) such as f(R) gravity or extended gravity theories.
The RENOIR cosmology team at CPPM focuses on understanding the history and composition of our Universe, particularly its dark components. The team is heavily involved in large spectroscopic surveys: the Dark Energy Spectroscopic Instrument (DESI) at the Mayall telescope (US) and the European space mission Euclid, which will provide observations of 40 million galaxies, the largest 3D map of the Universe ever made.
A promising way to probe modified gravity models is to constrain the growth of structure of the Universe using information from Redshift Space Distortions around cosmic voids. The aim of the internship is to test and quantify the importance of reconstruction methods, which aim to separate peculiar velocity components from the Hubble flow in redshift space, and to assess their impact on the construction of void catalogs.
This subject can be pursued with a thesis on the extraction of cosmological constraints using Alcock-Paczynski deformation information and RSD information around voids, with DESI data, whose 5-year observation campaign started in June 2021, and the Euclid mission, which will be launched in July 2023.
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionizing radiation in the health and energy fields. It participates in the PGTI (Prompt Gamma Time Imaging) project, whose objective is to reduce the uncertainties related to the path of protons during proton therapy treatments through the development of a detector for time-of-flight imaging of prompt gamma rays (PGs) created during irradiation. This project is based on the development of the TIARA (Time-of-flight Imaging Array) detector.
The accuracy of proton therapy is currently limited by the uncertainties related to the proton path, which result from the composition of the patient's tissues, physiological movements or transient changes in the anatomy, and which lead to the use of large safety margins (up to 1 cm) to avoid irradiating healthy tissues. The purpose of PG imaging is to allow real-time control of tumor treatment [1]. To fully exploit its potential, an innovative real-time treatment control detector based on time-of-flight imaging of PGs with a temporal resolution of 100 ps is proposed [2]. This detector consists of a set of lead fluoride Cherenkov converters of about 1 cm3 each, surrounding the irradiated volume and read in coincidence with a beam monitor. The principle consists in measuring precisely (to better than 100 ps) the time difference between the passage of the protons in the beam monitor, based on a diamond detector, and the arrival of the PGs in the Cherenkov converters. This time difference corresponds to the time-of-flight of the proton from the beam monitor to its interaction in the tissues, followed by the time-of-flight of the PG emitted in this interaction until its detection by TIARA. Knowing the positions of the TIARA detectors, this time difference constrains the coordinates of the emission point of the PGs, which allows a 3D reconstruction of the proton path in real time with millimeter precision [3].
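The time-of-flight principle can be sketched in one dimension (a simplified toy geometry with assumed proton speed and detector position, not the actual TIARA reconstruction): the measured time difference is the proton transit time to the emission depth plus the gamma flight time to the detector, and inverting that relation recovers the depth:

```python
# Hedged 1-D toy model of the time-of-flight principle: a proton crosses the
# beam monitor at t = 0 and travels along x; at depth x it emits a prompt
# gamma detected at a known position. Inverting dt(x) by bisection gives x.
import math

C = 29.98        # speed of light [cm/ns]
V_P = 0.55 * C   # assumed proton speed [cm/ns]

def predicted_dt(x, det_x, det_r):
    """Proton transit time to depth x plus PG flight time to the detector."""
    d_gamma = math.hypot(det_x - x, det_r)
    return x / V_P + d_gamma / C

def emission_depth(dt_meas, det_x, det_r, lo=0.0, hi=40.0):
    """Invert predicted_dt for x by bisection (dt is monotonic in x here)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if predicted_dt(mid, det_x, det_r) < dt_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Detector 10 cm downstream, 15 cm off-axis; PG emitted at a true depth of 8 cm
dt = predicted_dt(8.0, 10.0, 15.0)
depth = emission_depth(dt, 10.0, 15.0)
print(f"reconstructed depth: {depth:.2f} cm")
```

A 100 ps timing error corresponds to a few millimeters in depth in this toy geometry, consistent with the millimeter-level goal quoted above.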
A 3D reconstruction algorithm of the proton path in real time specific to the TIARA detector and its physics has been developed and validated on analytical and Monte Carlo data. The objective of this internship is to develop and study the performance of machine and/or deep learning methods for this problem. The first part will involve building a substantial database by exploiting the modelling of the PGTI/TIARA experiment on the GEANT4 Monte Carlo simulation platform, which has already been carried out. This data should correspond to a realistic benchmark. In a second part, this database will be used to train neural networks capable of estimating the energy deposited in the patient on the basis of time of flight. The robustness and accuracy of these neural networks will be assessed and compared with an analytical algorithm that has already been developed.
These developments will mainly use Python and the GATE/Geant4 toolkit.
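As a minimal, self-contained stand-in for the neural-network step (a toy one-hidden-layer network in pure Python regressing a nonlinear one-dimensional target; the actual work would use a deep learning framework and GEANT4-simulated times of flight), the training loop looks like:

```python
# Toy neural-network regression (assumed setup, not the project's model):
# one tanh hidden layer trained by full-batch gradient descent on y = x^2,
# standing in for the mapping from times of flight to deposited energy.
import math
import random

random.seed(0)
H = 8                                        # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

xs = [i / 49 for i in range(50)]             # toy input feature
ys = [x * x for x in xs]                     # toy nonlinear target

lr = 0.2
for _ in range(4000):
    g_w1 = [0.0] * H; g_b1 = [0.0] * H; g_w2 = [0.0] * H; g_b2 = 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        pred = sum(w2[j] * h[j] for j in range(H)) + b2
        d = 2.0 * (pred - y) / len(xs)       # dMSE/dpred
        for j in range(H):
            g_w2[j] += d * h[j]
            dh = d * w2[j] * (1.0 - h[j] ** 2)
            g_w1[j] += dh * x
            g_b1[j] += dh
        g_b2 += d
    for j in range(H):
        w1[j] -= lr * g_w1[j]; b1[j] -= lr * g_b1[j]; w2[j] -= lr * g_w2[j]
    b2 -= lr * g_b2

mse = sum((sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H))
           + b2 - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"training MSE: {mse:.4f}")
```

The internship would scale this idea up (more inputs, deeper networks, proper validation) and compare the trained estimator's robustness and accuracy against the existing analytical algorithm.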
Candidates are invited to contact the person in charge of the subject, attaching a CV, a letter of motivation and the latest transcript (that of the previous year and that of the current semester, if available).
[1] J Krimmer et al., Prompt-gamma monitoring in hadrontherapy: A review, Nucl. Instrum. Methods A 878 (2018) 58-73
[2] S. Marcatili et al., Ultra-fast prompt gamma detection in single proton counting regime for range monitoring in particle therapy, Phys. Med. Biol. 65 (2020) 45033
[3] M. Jacquet et al., A time-of-flight-based reconstruction for real-time prompt-gamma imaging in protontherapy, Phys. Med. Biol. 66 (2021) 135003
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionizing radiation in the health and energy fields. It participates in the TIARA (Time-of-flight Imaging Array) project, whose objective is to reduce the uncertainties related to the path of protons during proton therapy treatments through the development of a detector for time-of-flight imaging of prompt gamma rays (PGs) created during irradiation.
The accuracy of proton therapy is currently limited by the uncertainties related to the proton path, which result from the composition of the patient's tissues, physiological movements or transient changes in the anatomy, and which lead to the use of large safety margins (up to 1 cm) to avoid irradiating healthy tissues. The purpose of PG imaging is to allow real-time control of tumor treatment [1]. To fully exploit its potential, an innovative real-time treatment control detector based on time-of-flight imaging of PGs with a temporal resolution of 100 ps is proposed [2]. This detector consists of a set of lead fluoride Cherenkov converters of about 1 cm3 each, surrounding the irradiated volume and read in coincidence with a beam monitor. The principle consists in measuring precisely (to better than 100 ps) the time difference between the passage of the protons in the beam monitor, based on a diamond detector, and the arrival of the PGs in the Cherenkov converters. This time difference corresponds to the time-of-flight of the proton from the beam monitor to its interaction in the tissues, followed by the time-of-flight of the PG emitted in this interaction until its detection by TIARA. Knowing the positions of the TIARA detectors, this time difference constrains the coordinates of the emission point of the PGs, which allows a 3D reconstruction of the proton path in real time with millimeter precision [3].
A 3D reconstruction algorithm of the proton path in real time specific to the TIARA detector and its physics has been developed and validated on analytical data. The objective of this internship is to evaluate the performance of this algorithm on more realistic data obtained by Monte-Carlo simulation. In a first part, the TIARA experiment will be modelled on the Monte-Carlo simulation platform GATE and data corresponding to a realistic benchmark will be generated. In a second part, the robustness and accuracy of the 3D reconstruction algorithm already developed will be evaluated on these Monte-Carlo data.
These developments will mainly use Python and the GATE/Geant4 toolkit.
Candidates are invited to contact the person in charge of the subject, attaching a CV, a letter of motivation and the latest transcript (that of the previous year and that of the current semester, if available).
[1] J Krimmer et al., Prompt-gamma monitoring in hadrontherapy: A review, Nucl. Instrum. Methods A 878 (2018) 58-73
[2] S. Marcatili et al., Ultra-fast prompt gamma detection in single proton counting regime for range monitoring in particle therapy, Phys. Med. Biol. 65 (2020) 45033
[3] M. Jacquet et al., A time-of-flight-based reconstruction for real-time prompt-gamma imaging in protontherapy, Phys. Med. Biol. 66 (2021) 135003
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionizing radiation in the health and energy fields. It participates in the ClearMind project whose objective is to develop an optimized detector for highly time-resolved applications, in particular for time-of-flight positron emission tomography (PET).
Measuring the time of flight of a pair of annihilation photons, i.e. the time between the detections of the two 511 keV photons, makes it possible to constrain the tomographic inversion within a back-projection range set by the accuracy of the time-of-flight measurement, which is given by the coincidence time resolution (CTR). Since the speed of light in vacuum is about 30 cm/ns, a CTR of 10 ps FWHM would localize the electron-positron annihilation with an accuracy of 1.5 mm FWHM. This would be sufficient to obtain an image of the distribution of annihilation points virtually without reconstruction, and thus to limit the dose required to reach an image quality equivalent to that of clinical PET cameras. Currently, state-of-the-art cameras achieve a CTR of 215 ps FWHM. The objective of the ClearMind project is to improve the temporal resolution of the detectors by using a scintillating lead tungstate (PWO) crystal as the input window of a microchannel-plate photomultiplier tube (MCP-PMT), with a photocathode deposited directly on the inner face of the PWO crystal. This avoids total reflections of scintillation and Cherenkov photons at the PWO/photocathode interface and improves the collection of Cherenkov photons, whose emission is practically instantaneous when a photoelectron is emitted faster than the speed of light in the PWO [1].
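The numbers above follow from a one-line relation: the localization FWHM along the line of response is half the CTR times the speed of light. A quick check (illustrative helper, not project code):

```python
C_CM_PER_NS = 30.0  # rounded speed of light, as quoted in the text

def annihilation_fwhm_mm(ctr_ps):
    """Localization FWHM (mm) along the line of response for a given CTR (ps)."""
    return C_CM_PER_NS * (ctr_ps * 1e-3) / 2.0 * 10.0  # cm -> mm
```

A 10 ps CTR gives 1.5 mm, while today's 215 ps corresponds to about 32 mm, which is why conventional PET still requires tomographic reconstruction.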
A mechanical bench for tomographic experimentation, called tomXgam, has been built at CPPM; the first prototypes produced within the ClearMind project are mounted on it. The objective of the internship is to take part in the first measurement campaign with the ClearMind scintronic detectors on tomXgam.
Candidates are invited to contact the person in charge of the internship subject, attaching a CV, a letter of motivation and the latest transcript (the previous year's and, if available, the current semester's).
[1] D. Yvon et al., Design study of a scintronic crystal targeting tens of picoseconds time resolution for gamma ray imaging: the ClearMind detector, J. Instrum. 15 (2020) P07029
Context
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionising radiation in the field of health and energy. The internship topic proposed here aims at improving the temporal performance of gamma ray detectors in the context of time-of-flight positron emission tomography (PET).
The coincidence time resolution (CTR) of state-of-the-art clinical TOF-PET cameras is around 210 ps FWHM. A CTR of 10 ps FWHM would allow the position of an electron-positron annihilation to be located to better than 1.5 mm FWHM, making it possible to obtain a PET image virtually without tomographic inversion [1]. One possible way to improve the temporal performance of detectors is to exploit the Cherenkov radiation generated by the motion of photoelectrons in a transparent interaction medium [2]. If this transparent medium is also scintillating, two populations of visible photons are emitted with different temporal distributions: the first almost instantaneously by the Cherenkov effect, the second slightly delayed by the de-excitation of a radiative centre responsible for scintillation [3]. The photons may then undergo reflections on the faces of the transparent medium before being collected by one or more photodetectors in order to tag the photoelectric interaction with an accurate detection time. The existence of these different temporal distributions makes the measurement of the CTR complex [4].
Aim of the internship
The aim of the internship is to determine whether deep learning techniques can label these two populations of visible photons from their detection time and position and, if so, accurately date the photoelectric interaction in order to improve the coincidence time resolution. The trainee will build Monte Carlo datasets with GATE [5] simulating the interaction of 511 keV gamma rays in a scintillating medium, use the Monte Carlo truth to learn the position of the photoelectric interaction, the type of emission (Cherenkov or scintillation) and the interaction time, and then seek to reduce the dimension of the phase space as much as possible while preserving the accuracy of the observables retrieved by deep learning.
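As a toy version of the labelling task, here is a sketch under strong simplifying assumptions: plain NumPy logistic regression rather than the deep networks the internship targets, and synthetic arrival times rather than GATE data. Cherenkov photons are generated prompt, scintillation photons with an exponential delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, n).astype(float)          # 1 = Cherenkov photon
t = np.where(y == 1,
             np.abs(rng.normal(0.05, 0.02, n)),  # prompt arrival (ns)
             rng.exponential(10.0, n))           # delayed scintillation (ns)
X = np.column_stack([np.ones(n), t])             # bias + arrival-time feature

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w = np.zeros(2)
for _ in range(500):                             # plain gradient descent
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / n

accuracy = ((sigmoid(X @ w) > 0.5) == (y == 1)).mean()
```

Even this crude model separates the two populations well on time alone; the real problem adds detection position, optical reflections and detector response, which is where deeper networks come in.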
Required knowledge: Python programming, knowledge of radiation-matter interactions, notions of deep learning (DL)
[1] P. Lecoq, C. Morel et al. Roadmap towards the 10 ps time-of-flight PET challenge, Phys. Med. Biol. 65 (2020) 21RM01
[2] S.K. Kwon et al, Ultrafast timing enables reconstruction-free positron emission imaging, Nat. Photon. (2021) https://doi.org/10.1038/s41566-021-00871-2
[3] D. Yvon et al, Design study of a scintronic crystal targeting tens of picoseconds time resolution for gamma ray imaging: the ClearMind detector, J. Instrum. 15 (2020) P07029
[4] J. Nuyts et al. Estimating the relative SNR of individual TOF-PET events for Gaussian and non-Gaussian TOF-kernels, in Proc. Fully-3D'2021, G. Schramm, A. Rezaei, K. Thielemans and J. Nuyts eds, pp. 19-23.
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionising radiation in the field of health and energy. The internship topic proposed here aims at improving the performance of an automatic segmentation process of liver tumours in the context of small animal CT.
Context
This internship is part of the DePIcT project funded by the CNRS Mission pour les Initiatives Transverses et Interdisciplinaires (https://miti.cnrs.fr/projet-multi-quipe/depict/, https://www.in2p3.cnrs.fr/fr/cnrsinfo/palmares-des-80prime-2020-4-projets-pilotes-par-lin2p3-decrochent-un-financement). As part of a preclinical study on hepatocellular carcinoma, longitudinal in vivo follow-ups are carried out with the PIXSCAN-FLI, a photon-counting micro-CT developed at CPPM. Photon counting has been demonstrated to guarantee very high contrast in the 3D images.
In addition, the ultra-fast acquisition (100 images per second) makes it possible to capture the respiratory movements of the mouse. The study relies on an imaging protocol established at CPPM (Cassol et al. 2019), which consists of labelling the liver with barium nanoparticles, a contrast agent absorbed by liver macrophages. Tumours, which appear in negative, can then be observed and characterized thanks to the radiopacity of the surrounding contrast agent. This technique allows the liver to be differentiated from tumours and a series of important tumour parameters (size, shape, etc.) to be estimated over time.
Objectives
The aim of the internship is to implement a state-of-the-art deep learning method for automatic segmentation in micro-CT and to evaluate its performance (Léger et al. 2018, Brion et al. 2020). The goal here is to automatically segment the liver as well as liver tumours. The trainee will be able to rely on a large real-world database for which expert tumour segmentations are already available. If these methods prove satisfactory, they will be incorporated into the PIXSCAN-FLI automatic data-processing pipeline for routine use.
This study may include analysis and correction of mouse breathing movements to improve the sharpness of 3D images.
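Whatever network is chosen, its output will have to be scored against the expert reference segmentations; the standard figure of merit is the Dice similarity coefficient. A minimal sketch (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity between two boolean masks (e.g. 3-D CT volumes).
    Returns 1.0 for perfect overlap, 0.0 for disjoint masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```

Reporting per-organ Dice over the available database gives a direct, comparable measure of how close the automatic segmentation comes to the expert one.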
Skills required: Python programming, deep learning. Knowledge of the context and physics of CT imaging will be appreciated.
Bibliography
E. Brion et al, Domain adversarial networks and intensity-based data augmentation for male pelvic organ segmentation in cone beam CT in Computers in Biology and Medicine https://dial.uclouvain.be/pr/boreal/object/boreal:245104
J. Léger et al, Contour Propagation in CT Scans with Convolutional Neural Networks in Advanced Concepts for Intelligent Vision Systems https://dial.uclouvain.be/pr/boreal/object/boreal:203221
F. Cassol et al, Tracking dynamics of spontaneous tumours in mice using Photon Counting Computed Tomography, iScience 21 (2019) 68-83 https://www.sciencedirect.com/science/article/pii/S2589004219303943
Internship M1
The data acquisition and trigger electronics of the ATLAS liquid argon calorimeter will be fully replaced as part of the second phase of upgrade of the ATLAS detector. The new backend electronics will be based on high-end FPGAs that will compute on-the-fly the energy deposited in the calorimeter before sending it to the trigger and data acquisition systems. New state-of-the-art algorithms, based on neural networks, are being developed to compute the energy and improve its resolution in the harsh conditions of the HL-LHC.
The candidate is expected to take part in the development of data-processing algorithms to efficiently compute the energies deposited in the LAr calorimeters under the high pile-up conditions expected at the HL-LHC. These algorithms, based on AI techniques such as recurrent neural networks, will be adapted to run on hardware processing units based on high-end FPGAs. The successful candidate will be responsible for designing the AI algorithms, using Python and Keras, and assessing their performance. The candidate will also assess the effect of employing such algorithms on electromagnetic-object reconstruction (especially at trigger level). She/he will work closely with the engineers designing the electronic boards at CPPM in order to adapt the AI algorithms to the specifics of FPGAs. Candidates with a strong interest in hardware will be encouraged to take part in the design of the firmware programming the FPGAs.
Prior knowledge of Keras, Python and C++ is desirable but not mandatory.
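To illustrate the kind of computation involved, here is a toy NumPy forward pass of a vanilla recurrent cell with random placeholder weights; it is not the actual ATLAS algorithm (which is trained with Keras), only the structure: the cell processes digitized pulse samples one by one and keeps a hidden state carrying the pile-up history.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 8
Wx = rng.normal(0.0, 0.3, n_hidden)              # input -> hidden weights
Wh = rng.normal(0.0, 0.3, (n_hidden, n_hidden))  # hidden -> hidden weights
Wo = rng.normal(0.0, 0.3, n_hidden)              # hidden -> energy readout

def rnn_energy(samples):
    """One energy estimate per ADC sample; the hidden state h carries
    information from earlier samples across the sequence."""
    h = np.zeros(n_hidden)
    out = []
    for s in samples:
        h = np.tanh(Wx * s + Wh @ h)
        out.append(float(Wo @ h))
    return out

pulse = [0.0, 0.1, 0.8, 1.0, 0.6, 0.3, 0.1]      # toy calorimeter pulse shape
energies = rnn_energy(pulse)
```

On the FPGA the same arithmetic must run with fixed-point weights within the latency budget, which is what drives the hardware-adaptation work described above.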
Our group is developing machine learning for embedded trigger systems of the ATLAS detector; the networks we develop must therefore be optimized for speed and resource consumption on Field Programmable Gate Arrays (FPGAs). The networks are developed with Tensorflow/Keras, a Python library, but historically C++ has been the language of choice in particle physics for its speed. While Python has a reputation as a slow language, this is no longer the case with tools such as Numba, which uses LLVM and Just-In-Time (JIT) compilation to optimize performance. The purpose of this internship is to develop methods for fast data processing in Python that would replace or complement the existing parts written in C++ or (slow) Python. The work consists of evaluating Numba as a way to speed up the Python used in data processing and analysis, as well as the possibility of binding Python to C++ with tools such as pybind11. The internship also includes developing neural networks with tools that produce more optimized networks, which retain their original accuracy but are compressed to run fast and consume few resources on FPGAs.
The project is carried out in English. The candidate is expected to have Python programming skills and good communication skills in English. The project is expected to last 2 to 3 months and can start in spring 2022.
Since they are forbidden in the Standard Model (SM) of particle physics, lepton flavour violating decays are among the most powerful probes of physics beyond the SM. Several new-physics models predict branching fractions just below the current experimental limits, hence the interest of this subject.
The Belle II experiment, located at KEK in Japan, started taking data in 2019 and aims at collecting 50 times more data than its predecessor, Belle, by 2031. The goal of this internship is to exploit the Belle II data to obtain the best experimental limits on lepton flavour violating decays involving a hadronic system X and an electron or a muon.
Activities:
Data analysis possibly using Machine Learning techniques.
Work context:
This internship will take place at CPPM, Marseille (https://www.cppm.in2p3.fr/web/en/index.html).
Applications must include a CV and grade records.
References:
https://arxiv.org/abs/1808.10567
https://arxiv.org/abs/1903.11517
https://arxiv.org/pdf/2103.16558.pdf
https://arxiv.org/pdf/1806.05689.pdf
CPPM is developing an application presenting the laboratory's activities to a high-school audience. The digital-communication trainee will join the project team and contribute to the production of multimedia content in several forms: an editorial section on fundamental topics, video interviews presenting jobs and skills, and knowledge quizzes. The internship lasts 3 months.
Time-domain astronomy has received a considerable boost in recent years thanks to its ability to study extreme physics, to track cataclysmic phenomena such as the birth of stellar-mass black holes or the mergers of neutron stars, to probe distant regions of the Universe, and to identify candidate sources for multi-messenger astrophysics. These explosive events can release enormous amounts of energy both in electromagnetic radiation and in non-electromagnetic forms such as neutrinos and gravitational waves. They lie at the frontier of our understanding of the laws of physics under the most extreme conditions. Multi-messenger astronomy, the observation of astrophysical objects and processes using combinations of different messengers such as electromagnetic radiation, neutrinos, cosmic rays and gravitational waves, has emerged as a major new field of astronomy in recent years.
At CPPM, we mainly work on the development of multi-messenger analyses with high-energy neutrinos detected by the ANTARES and KM3NeT neutrino telescopes. In this context, we are developing a real-time analysis framework able to send neutrino alerts and to receive external alerts and cross-match them with high-energy neutrinos. In the coming years, the LSST telescope in Chile will be one of the major discovery machines for optical transients, with around a million triggers expected each night. To cope with these large numbers, LSST relies on alert brokers that filter the alerts. In France, colleagues are implementing the FINK broker (https://arxiv.org/abs/2009.10185). Real data are already available from the ZTF telescope in the US.
During this internship, the student will implement a filter chain in the broker to identify the most interesting candidates for the neutrino searches. The filters will select on the nature of the transient, the number of detections, the light curve, and cross-matches with astrophysical catalogues.
The analyses will be performed in C++ or Python.
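The filter chain described above can be sketched as a sequence of boolean checks on each alert. This is only an illustration: the field names below are hypothetical and do not follow the actual FINK/ZTF alert schema.

```python
def passes(alert):
    """Keep only alerts interesting for neutrino follow-up.
    All field names are hypothetical placeholders."""
    checks = [
        alert.get("n_detections", 0) >= 2,           # seen more than once
        alert.get("classification") not in {"asteroid", "variable_star"},
        alert.get("rising", False),                  # light curve still rising
        not alert.get("catalog_match_star", False),  # no known-star crossmatch
    ]
    return all(checks)

alerts = [
    {"n_detections": 3, "classification": "supernova", "rising": True},
    {"n_detections": 1, "classification": "asteroid"},
]
selected = [a for a in alerts if passes(a)]
```

In a broker each check would typically be a separate, configurable filter stage so that selections can be tuned and monitored independently.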
The KM3NeT experiment is a next-generation neutrino telescope currently under construction at two sites: ORCA (Oscillation Research with Cosmics in the Abyss) and ARCA (Astroparticle Research with Cosmics in the Abyss).
The ORCA site foresees the installation of 115 detection units (DUs) at 2500 m depth in the Mediterranean Sea off Toulon and is optimized for atmospheric-neutrino detection in the 3-100 GeV energy range, allowing for precision studies of the neutrino oscillation parameters. Although only partially instrumented so far, KM3NeT/ORCA has already been operating for several months and is collecting high-quality data. During this internship, the student will have the opportunity to become familiar with neutrino physics and to analyse the KM3NeT/ORCA data collected so far in order to study neutrino oscillation properties. The candidate is expected to pursue similar studies in an M2 internship and possibly a PhD in our group.
We are offering a challenging internship opportunity for individuals passionate about the intersection of artificial intelligence and astrophysics. The successful candidate will work on applying advanced AI algorithms to spectrophotometric data obtained from the Near Infrared Spectrometer and Photometer (NISP) instrument aboard the European Space Agency's Euclid Space Telescope. The primary focus of the internship will be on utilizing Python programming skills to analyze and interpret the data, gaining insights into the behavior of the NISP instrument.
Responsibilities:
Implement and optimize AI algorithms for the analysis of Euclid Space Telescope spectrophotometric data.
Collaborate with the research team to understand the intricacies of the NISP instrument and its observational data.
Develop Python scripts and code to preprocess, analyze, and visualize the data.
Contribute to the documentation of the developed algorithms and methodologies.
Participate in team meetings and discussions to share progress and insights.
Prior knowledge of Python is mandatory, and experience with machine learning would be much appreciated.
The accelerated expansion of the Universe and the mass of the neutrino species are fundamental problems in physics that can be tackled with cosmological observations. The Dark Energy Spectroscopic Instrument is an ongoing spectroscopic survey mapping the three-dimensional distribution of matter in the Universe through precise redshift measurements. At high-redshifts (2
The imXgam research team conducts interdisciplinary research activities for imaging applications of ionizing radiation in the health and energy fields. It participates in the ClearMind project whose objective is to develop an optimized detector for highly time-resolved applications, in particular for time-of-flight positron emission tomography (PET).
Measuring the time of flight of a pair of annihilation photons, i.e. the time between the detections of the two 511 keV photons, makes it possible to constrain the tomographic inversion within a back-projection range set by the accuracy of the time-of-flight measurement, which is given by the coincidence time resolution (CTR). Since the speed of light in vacuum is about 30 cm/ns, a CTR of 10 ps FWHM would localize the electron-positron annihilation with an accuracy of 1.5 mm FWHM. This would be sufficient to obtain an image of the distribution of annihilation points virtually without reconstruction, and thus to limit the dose required to reach an image quality equivalent to that of clinical PET cameras. Currently, state-of-the-art cameras achieve a CTR of 215 ps FWHM. The objective of the ClearMind project is to improve the temporal resolution of the detectors by using a scintillating lead tungstate (PWO) crystal as the input window of a microchannel-plate photomultiplier tube (MCP-PMT), with a photocathode deposited directly on the inner face of the PWO crystal. This avoids total reflections of scintillation and Cherenkov photons at the PWO/photocathode interface and improves the collection of Cherenkov photons, whose emission is practically instantaneous when a photoelectron is emitted faster than the speed of light in the PWO [1].
A mechanical bench for tomographic experimentation, called tomXgam, has been built at CPPM; the first prototypes produced within the ClearMind project are mounted on it. The objective of the internship is to take part in the first measurement campaign on tomXgam in order to confirm its readiness and to characterize the performance of the detectors mounted on it.
Candidates are invited to contact the person in charge of the internship subject, attaching a CV, a letter of motivation and the latest transcript (the previous year's and, if available, the current semester's).
[1] D. Yvon et al., Design study of a scintronic crystal targeting tens of picoseconds time resolution for gamma ray imaging: the ClearMind detector, J. Instrum. 15 (2020) P07029
Technical Internship
The mission of CPPM (CNRS and Aix-Marseille Université) is to explore and expand knowledge in the physics of matter and of the Universe.
The CPPM Training Correspondent is responsible for collecting the staff's training needs and compiling them into a document called the Unit Training Plan.
Missions
The trainee will join the CPPM administrative department under the responsibility of Ms Isabelle Richer-Gonzalez, Training Correspondent, and will assist her in preparing the training plan.
Main activities
The trainee may carry out the following tasks:
Collecting training requests
Drawing up a review and dashboards of past training actions
Drawing up a review of requested versus completed actions
Helping to draft the unit's training plan
Circulating training information and announcements
Following up individual training requests
Desirable knowledge
Training law
Continuing-education schemes
Good command of office software (Excel, Word, PowerPoint)
Written and oral presentation techniques
Operational skills
Interpersonal skills
Analytical and summarizing abilities
Designing dashboards
Suitable study backgrounds
This internship would suit, for example, a student (one or two years of post-secondary study) preparing a degree in human resources, or a BTS or DUT in Business and Administration Management (GEA) with a "Human Resources" option.
The internship may last from 4 to 7 weeks between 15 May and 15 July 2023.
Contact: CV + cover letter to Isabelle Richer-Gonzalez, CPPM Training Correspondent
Tel: +33 4 91 82 72 26 - Email: richer@cppm.in2p3.fr
The Centre de Physique des Particules de Marseille, a joint CNRS/Aix-Marseille Université research unit, designed and delivered the data acquisition system of the LHCb experiment: readout of all sub-detector data (30 Tbit/s) over 10,000 optical links at 10 Gbit/s, real-time processing, and transfer to a computing farm. CPPM is now developing the next generation, which will have to reach ten times the computing power of the current system. To achieve this, Intel is giving us early access to its most powerful FPGA, with 4 million logic elements and transceivers capable of transmitting up to 112 Gb/s.
Main activities
The work consists of helping to characterize the prototype board of the future system. The proposed tasks will be defined with the candidate according to their interests and skills: software and firmware development, as well as instrumentation and simulation techniques.
Real-time eye-diagram measurement to verify the quality of the serial links
Measurement of the board's power consumption and heat dissipation
Firmware for clock distribution with O(10) ps precision
Setting up a test bench for the serial interface with the CERN protocol
Design of C++ drivers to interface the board's PCIe and JTAG buses
Automatic board monitoring system (currents, voltages, temperature)
Python application to configure the board's peripherals
Development of Pytest routines to automate test benches
Application development for the ARM co-processor embedded in the FPGA
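The Pytest-based bench automation mentioned in the list above could look like the following sketch; the FakeBoard class and its readings are invented stand-ins for the real PCIe driver of the prototype board.

```python
class FakeBoard:
    """Hypothetical stand-in for the board's driver; in a real bench these
    values would come from the board's telemetry over PCIe or JTAG."""

    def read_temperature_c(self):
        return 42.0

    def read_voltage_v(self, rail):
        return {"VCC_CORE": 0.80}.get(rail, 0.0)

def test_temperature_within_limits():
    assert 10.0 < FakeBoard().read_temperature_c() < 85.0

def test_core_rail_within_tolerance():
    assert abs(FakeBoard().read_voltage_v("VCC_CORE") - 0.80) < 0.05
```

Running `pytest` collects every `test_*` function automatically and reports each check separately, which is what makes it convenient for automated bench campaigns.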
The trainee will join the CPPM electronics department, which has extensive expertise in the design of high-density FPGA boards. The work will take place in an international research environment, in close interaction with Intel's teams; a few trips to CERN (Geneva) for collaboration meetings will be possible.
Desirable knowledge
The following skills will be appreciated; training on the tools used will be provided:
Use of measurement instruments: serial data analyser, spectrum analyser
Signal- and power-integrity simulations
Analogue and digital electronics and high-speed signal transmission
FPGA firmware design in VHDL using Quartus
Software design in Python (Polars, Pytest), C++ and possibly PyQt
Contact: CV + cover letter with the reference "Ingenieur-2324-EL-03" to
Frédéric HACHON, Research Engineer, CPPM technical internship correspondent. Tel: +33 4 91 82 76 71 - Email: hachon@cppm.in2p3.fr
The 6-month internship will be covered by an internship agreement and paid.
The internship takes place at the Centre de Physique des Particules de Marseille (CPPM), a joint research unit (UMR 7346) of IN2P3, the institute that brings together the particle and nuclear physics activities of CNRS, and of Aix-Marseille Université.
CPPM has been participating for several years in the ATLAS experiment at CERN in Geneva, within an international collaboration of more than 3,000 scientists from 174 institutes in no fewer than 38 countries.
One of the major technical challenges of the ATLAS experiment is the growing amount of data to be carried from the detector to the computing centres; very-high-throughput, radiation-hardened application-specific integrated circuits are essential for transmitting these data.
CPPM is a member of a CERN R&D collaboration (RD53) developing the readout chip of the pixel detectors for the upcoming upgrades of all LHC experiments. This chip was developed in a 65 nm CMOS process and contains 160,000 pixels of 50×50 µm².
The high luminosity expected at the future LHC upgrades, and at the next generations of colliders, translates into very high hit rates per unit area in the various detectors, particularly in the pixel detectors close to the interaction point. In this context, good spatial precision, achieved by reducing the pixel size, is essential. Adding high-precision timing information to the existing spatial precision (4D tracking) is an attractive feature that could open the way to further improvements in track reconstruction.
Main activities
The goal of the internship is to propose an architecture for the analogue charge-detection front-end stage, composed of a charge amplifier and a discriminator. The objective is to reach a time resolution of 50 ps while keeping the power consumption low and the electronic noise compatible with the energy-threshold requirements of the tracking detector.
The 6-month internship will be organized in several steps:
Study of the current track-detection system used in ATLAS
Study, design and optimization of the current charge-amplifier circuit in a 28 nm CMOS process
Simulation and optimization of the circuit in Cadence Virtuoso
Layout design in Cadence
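The 50 ps target can be related to front-end noise and signal slope through the standard discriminator-jitter estimate, sigma_t = sigma_v / (dV/dt). This is a textbook back-of-envelope check, not the project's specification; the numbers in the comment are illustrative.

```python
def time_jitter_ps(noise_mv_rms, slope_mv_per_ns):
    """Discriminator timing jitter: voltage noise divided by signal slope
    at the threshold crossing, converted from ns to ps."""
    return noise_mv_rms / slope_mv_per_ns * 1000.0

# e.g. 1 mV rms of noise on a 20 mV/ns edge gives 50 ps rms of jitter,
# showing how the noise and speed requirements of the front-end are coupled
```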
Required and desirable knowledge
Good knowledge of analogue CMOS circuit design
Experience developing test benches based on FPGA-type programmable devices is considered an asset
After the internship: possibility of continuing with a PhD thesis
Contact: CV + cover letter with the reference "Ingenieur-2324-EL-02" to
Frédéric HACHON, Research Engineer, CPPM technical internship correspondent. Tel: +33 4 91 82 76 71 - Email: hachon@cppm.in2p3.fr
Belle II is a general-purpose detector at the SuperKEKB collider in Japan. It was designed and built to test new physics models and search for signatures of new particles. The Belle II detector is a particle detector 7.5 m long and 7 m high, composed mainly of a vertex detector and a calorimeter. As the luminosity of the SuperKEKB accelerator increases, the Belle II vertex subsystem will also have to evolve and be upgraded by 2026.
An international collaboration has been set up to design this detector upgrade.
At CPPM, a group of about ten physicists, engineers and technicians is involved in the Belle II project, focusing in particular on the upgrade of the vertex detector (VXD), the innermost detector, closest to the interaction point. This tracking detector is designed to follow the passage of particles from the moment they are created.
The elementary building block of the tracker is a matrix application-specific integrated circuit (ASIC) of several million transistors. This circuit operates like a pixel camera that must take an image of the detected particles. Several design constraints apply to the electronics, such as area, speed, power consumption and precision. Moreover, in order to operate autonomously, the circuit needs general-purpose functions such as a bandgap reference, a temperature sensor, an analogue buffer and its ADC, digital decision logic and memories, and a system for distributing the supplies and bias voltages of the stages. High-speed input/output stages following standards such as LVDS or CML will also be integrated.
Main activities
First, the trainee will carry out a detailed literature survey of the circuit serving as the project's reference (TJ-MONOPIX2), of monolithic pixel detectors, and of the general-purpose functions. The trainee will then study and design the function best suited to the application according to the provided specifications.
Depending on the project's progress, the trainee will help the design team finalize the OBELIX prototype circuit, to be fabricated during 2024.
Literature survey of the function's possible architectures
Design and simulation in Cadence
Layout design
Post-layout simulation
Tests on previous circuits are also foreseen
Required knowledge
Good knowledge of IC design in CMOS technology
Experience handling measurement instruments
Contact: CV + cover letter with the reference "Ingenieur-2324-EL-01" to
Frédéric HACHON, Research Engineer, CPPM technical internship correspondent. Tel: +33 4 91 82 76 71 - Email: hachon@cppm.in2p3.fr
The 6-month internship will be covered by an internship agreement and paid.
CPPM works in particular on the ATLAS experiment based at CERN in Geneva. This international collaboration comprises 3,000 scientists from 174 institutes in no fewer than 38 countries. One of its missions is to work towards the 2029 upgrade of the readout electronics of the liquid-argon calorimeter of the ATLAS detector.
A team of CPPM engineers and physicists is working on a brand-new data acquisition and trigger system called the LASP processor. The project consists of developing a prototype board in ATCA format based on two Intel Agilex FPGAs, a MAX10 controller and some twenty optical modules. This board will compute in real time the energies deposited in the calorimeter following collisions in the LHC. One of the board's challenges is that it must process an enormous amount of data (500 Gb/s per FPGA) while computing the energy in under 125 ns. The board is expected to arrive in mid-2024.
Main activities
The work consists of developing firmware implementing a recurrent neural network in VHDL on Agilex FPGAs. This firmware will use artificial intelligence to compute the energies deposited in the calorimeter from the signals transmitted by its sensors. The neural networks have already been developed in software by computer scientists and physicists; the task is now to port the same computations and algorithms to VHDL firmware as optimally as possible. Since these networks require a large number of mathematical operations, they must fit into the FPGAs' limited resources while keeping the computation time under 125 ns. This is certainly a challenge for the FPGAs, but an achievable one: a prototype of this firmware has already been successfully implemented on an Intel Stratix 10 FPGA.
The proposed developments and activities may include:
Development of a firmware implementing a neural network
Simulation of this firmware with ModelSim to validate its functionality
Optimization of the FPGA implementation to reduce resource usage and latency
Development of a test firmware able to inject simulated data and then extract the result of the neural network computation
Validation of this test firmware on an Agilex development kit
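As a hedged illustration of the kind of fixed-point arithmetic such a software-to-firmware port involves, here is a minimal Python sketch of a one-unit recurrent cell evaluated with integer weights. All weights, scale factors and samples are hypothetical; the real networks are trained on calorimeter data and are far larger.

```python
# Minimal sketch of fixed-point recurrent inference, as one might
# prototype it in Python before porting to VHDL. All values are
# hypothetical illustrations, not the actual trained network.

FRAC_BITS = 8  # fixed-point fractional bits (scale factor 256)

def quantize(x):
    """Convert a float weight to its fixed-point integer form."""
    return int(round(x * (1 << FRAC_BITS)))

def rnn_energy(samples, w_in, w_rec, w_out):
    """Run a one-unit recurrent cell over integer ADC samples.

    Each multiply is followed by a right shift, mirroring how a DSP
    block result would be rescaled on the FPGA; ReLU keeps the state
    non-negative.
    """
    state = 0
    for s in samples:
        acc = s * w_in + state * w_rec
        state = max(acc >> FRAC_BITS, 0)  # rescale + ReLU
    return (state * w_out) >> FRAC_BITS

# Hypothetical quantized weights and a short sample sequence
print(rnn_energy([100, 200, 50],
                 quantize(0.5), quantize(0.25), quantize(1.0)))  # → 53
```

Replacing floating-point operations with shifts and integer multiplies like this is what makes the latency and resource budget on the FPGA tractable.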
The intern will join the electronics department, which has extensive expertise in the design of high-density boards, in instrumentation and in firmware development for latest-generation FPGAs. The student will have access to the equipment required for the complete development and testing of the firmware. Trips to CERN in Geneva will be possible to present the work to the group and take part in collaboration meetings.
Desirable skills
FPGA firmware development in VHDL with Quartus and ModelSim simulations
Good grounding in digital electronics and in Python
English for presentations, technical documentation and report writing
Contact: CV + cover letter with the reference "Ingenieur-2324-EL-05" to
Frédéric HACHON, Research Engineer - Technical internship correspondent at CPPM. Tel: +33 4 91 82 76 71 - Email: hachon@cppm.in2p3.fr
The 6-month internship will be covered by an internship agreement and paid.
CPPM is notably involved in the ATLAS experiment based at CERN in Geneva. This international collaboration brings together 3,000 scientists from 174 institutes in no fewer than 38 countries. One of its missions is to upgrade, by 2029, the readout electronics of the liquid argon calorimeter of the ATLAS detector.
A team of CPPM engineers and physicists is working on a brand-new data acquisition and trigger system called the LASP processor. The project consists in developing a prototype board in ATCA format based on two Intel Agilex FPGAs, a MAX10 controller and some twenty optical modules. This board will compute in real time the energies deposited in the calorimeter by collisions in the LHC. One of its challenges is that it must process an enormous amount of data (500 Gb/s per FPGA) while computing the energy in less than 125 ns. The board is expected by mid-2024.
Main activities
The goal is to help develop VHDL test firmwares for the MAX10 and the Agilex. These firmwares will be used to test the FPGA functionality on the first prototype board. This functionality covers the UART, I2C and SPI communication channels with various controllers, as well as the analysis of very high speed serial links. The firmwares will be validated on a development kit fitted with the same FPGA as the LASP board, and will also contribute to the construction of a performance test bench.
Activities planned during the internship:
Configuration of the FPGA power-up sequence
Interfacing with the board peripherals (sensors and optical interfaces)
Acquisition over I²C buses of temperature, current and voltage sensor values
Automatic monitoring of the board's analog values
C drivers allowing the MAX10 to communicate with various controllers
FPGA firmware for sending and receiving serial data over several lanes
Characterization of these serial signals by eye diagram
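To illustrate the serial-link testing mentioned above, here is a hedged Python sketch of a PRBS-7 generator, the standard pseudo-random pattern (polynomial x^7 + x^6 + 1) typically driven through high-speed lanes during eye-diagram characterization. On the real board the equivalent logic would of course live in the FPGA firmware; this is only a software model for reasoning about the pattern.

```python
# Software model of a PRBS-7 linear-feedback shift register
# (x^7 + x^6 + 1), the kind of pattern used to stress serial links
# during eye-diagram measurements. The seed value is arbitrary.

def prbs7(seed=0x7F, nbits=127):
    """Return nbits of a PRBS-7 sequence from a non-zero 7-bit seed."""
    state = seed & 0x7F
    out = []
    for _ in range(nbits):
        # Feedback taps at bit positions 6 and 5 (x^7 and x^6 terms)
        new_bit = ((state >> 6) ^ (state >> 5)) & 1
        out.append(new_bit)
        state = ((state << 1) | new_bit) & 0x7F
    return out
```

Because the polynomial is primitive, the sequence repeats with period exactly 127 bits, which a checker on the receive side can exploit to count bit errors.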
The intern will join the electronics department, which has extensive expertise in high-density board design and FPGA firmware. Trips to CERN in Geneva will be possible to present the work at collaboration meetings.
Required skills
Good grounding in digital and analog electronics, instrumentation and measurement
FPGA firmware development in VHDL with Intel Quartus
Embedded software development in C on microcontroller targets
Writing of test procedures and reports
Proficiency with office tools and technical English
Contact: CV + cover letter with the reference "Ingenieur-2324-EL-04" to
Frédéric HACHON, Research Engineer - Technical internship correspondent at CPPM. Tel: +33 4 91 82 76 71 - Email: hachon@cppm.in2p3.fr
The 6-month internship will be covered by an internship agreement and paid.
As part of a complex underwater infrastructure due to be extended, we wish to optimize the future Nodes: large titanium structures that distribute power and optical fibres to 200 m tall neutrino detection lines.
These Nodes, immersed at a depth of 2,500 m, will be based on the two structures already deployed. The work will consist in optimizing them: the frame, the interlink connection systems operable by submarine, and the design changes imposed by the project. The study must also address manufacturing aspects (constraints, costs, means, etc.) and ensure compliance with the requirements of sea operations.
This work takes place within neutrino astronomy and the multidisciplinary activities of the KM3NeT / Numerenv project. Neutrino telescopes aim to study high-energy cosmic neutrinos with an array of photodetectors installed on the sea floor.
To power these detection lines, hybrid connectors are used, and the connections are made by small underwater robots.
We are drawing on a gear-reduction tool already used in the laboratory, increasing its reduction ratio.
The student will work in close collaboration with an engineer and will be entrusted with CAD studies as well as subcontractor follow-up.
The Euclid mission (http://www.euclid-ec.org) is a major ESA project that launched a space telescope dedicated to understanding the Universe in July 2023. Through a survey of the entire sky, it will provide a 3D map of galaxies with unprecedented precision. These measurements of the large-scale structures of the distant Universe will test the cosmological model, particularly probing the nature of dark energy. The mapping will be achieved using the NISP spectrophotometer and its 16 infrared detectors, which were calibrated on the ground by CPPM, a fundamental step in validating the instrument's performance.
Main Activity
NISP's infrared detectors were specifically developed for the Euclid mission. At the cutting edge of technology, each one consists of a matrix of 2048 x 2048 pixels. Their fine calibration was carried out at CPPM, resulting in the recording of 500 Terabytes of data to be analyzed. These data clearly show the presence of persistence that contaminates the data for several hours of acquisition. In order to gain a better understanding of this phenomenon, the intern will work on characterizing the persistence and understanding the influence of environmental parameters on it.
Thus, the intern will need to apply classical or more sophisticated analysis methods in the following steps:
Implement the methods in Python.
Extract parameters such as time constants and amplitudes from the existing calibration data.
Analyze correlations between persistence and environmental data.
Conduct a similar study on a detector of a different technology and compare the results.
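As an illustrative sketch of the parameter-extraction step, the following Python code fits a single-exponential persistence model to synthetic data with `scipy.optimize.curve_fit`. The model form and all values here are assumptions for demonstration; the real analysis would run on the NISP calibration frames.

```python
# Sketch of extracting a persistence time constant and amplitude by
# fitting an exponential decay to a dark ramp. Data are synthetic;
# real inputs would come from the 500 TB of calibration data.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Single-exponential persistence model (signal vs. time in s)."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic dark ramp: 50 ADU persistence, 300 s time constant
t = np.linspace(0, 1800, 60)
signal = decay(t, 50.0, 300.0, 2.0)

popt, _ = curve_fit(decay, t, signal, p0=(40.0, 200.0, 0.0))
amplitude, tau, offset = popt
print(f"tau = {tau:.0f} s, amplitude = {amplitude:.1f} ADU")
```

In practice one would fit per pixel (or per region), then correlate the fitted time constants and amplitudes with the environmental data, which is exactly the correlation study listed above.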
Required Knowledge
Strong foundation in programming in Python
Good knowledge of signal processing
Good knowledge of semiconductor physics
The 6-month internship will be covered by an internship agreement and paid. It may lead to Ph.D. work with CNES funding (application in progress).
Contact: CV + cover letter
Aurélia Secroun, Research Engineer
Tel: 04 91 82 72 15 - Email: secroun@cppm.in2p3.fr
The Center for Particle Physics in Marseille is a mixed research unit (UMR 7346) affiliated with the CNRS and Aix-Marseille University. It conducts research activities in both the field of fundamental physics and applications based on ionizing radiation.
Satiety or addiction circuits in the brain are regulated by negative or positive feedback loops using neurotransmitters. These circuits can be imaged using positron emission tomography (PET) by labeling neurotransmitters with positron-emitting radioactive ions, such as 11C-labeled cocaine.
However, PET scans require subject anesthesia, which does not allow for the assessment of the actual brain behavior under awake conditions.
CPPM is involved in the MAPSSIC project, which aims to develop an intracranial CMOS pixel probe for positron imaging in awake and freely moving rats. The IMIC probe, consisting of several hundred active CMOS pixels, was developed by IPHC in Strasbourg to be permanently implanted in a rat's brain. Equipped with a backpack containing a battery and a wireless transmitter connected to the CMOS pixels, it allows for direct imaging of positrons emitted during the decay of a radioactive tracer attached to the molecules of the neurotransmitter under study.
Main Activity:
The intern will be part of the MAPSSIC project and will contribute to the study, design and implementation of a wireless solution for control-command and for the transmission, to an acquisition PC, of data collected simultaneously by 4 IMIC probes. The goal of the internship will be to optimize the system's power consumption. The components involved include an STM32 microcontroller, an nRF24 radio, and Winbond flash memory.
The main system is programmed in Rust.
This wireless solution must be integrated into a backpack suitable for a rat's size and should achieve several hours of autonomy, corresponding to multiple periods of decay of the radioactive tracer used to label the neurotransmitter.
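As a rough illustration of the power-budget reasoning behind such an optimization, here is a small Python sketch of a duty-cycled average-current estimate. All figures are hypothetical, not MAPSSIC measurements; they only show how duty-cycling the radio dominates the achievable autonomy.

```python
# Back-of-the-envelope autonomy estimate for a battery-powered
# backpack. Every number below is a hypothetical placeholder.

def avg_current_ma(active_ma, sleep_ma, duty):
    """Average current draw when the radio is active a fraction
    `duty` of the time and asleep the rest."""
    return active_ma * duty + sleep_ma * (1.0 - duty)

def autonomy_hours(capacity_mah, current_ma):
    """Battery life in hours = capacity / average draw."""
    return capacity_mah / current_ma

# Hypothetical: 12 mA active, 0.3 mA asleep, transmitting 5 % of the
# time, powered from a 150 mAh cell
i_avg = avg_current_ma(12.0, 0.3, 0.05)
print(autonomy_hours(150.0, i_avg))
```

The same arithmetic, applied to measured currents of the STM32, nRF24 and flash in their various modes, is what guides choices such as radio duty cycle and buffering data in flash before bursts.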
Profile Requirements:
Proficiency or interest in Rust
Experience with C/C++ programming
Embedded systems programming (µC) in C/C++
Knowledge of Python is a plus
The 6-month internship will be paid.
Bachelor Internship
CPPM welcomes students at bachelor level (L1, L2 and L3) for an internship.
Applications for internships are centralized by William Gillard. To apply, send him a cover letter with your CV, your latest grades and your contact details so that he can get back in touch with you. The administrative file will be followed by Jocelyne Munoz.
Contacts : William Gillard, Jocelyne Munoz
Secondary School
We welcome middle and high school students for internships during defined periods of time. All requests must include a justification, and, given the limited number of places, not all of them can be accepted.
-
for college level: one week in December (before the Christmas holidays)
-
for high school students: one week in June (during the baccalaureate exam period)
Exceptionally, no hosting in 2023
Contact : Jocelyne Munoz
TIPE
Since 1998, CPPM has hosted preparatory-class students to help them carry out their TIPE projects.
Most of them obtained, in their TIPE examination, a grade above the national average and successfully went on to an engineering school.
Contact: Heide Costantini