If you’re a Master’s or PhD student who wants to explore cutting-edge research at the intersection of cybersecurity, AI, and applied cryptography, Orange Services offers the right playground.
As one of the largest IT hubs within the Orange Group, we work internationally for both corporate functions and country operations. Through a unique combination of know-how and expertise, our teams deliver a broad range of IT & Business Solutions.
Come closer to #LifeAtOrange.
The internship will contribute to a concrete, technical work plan around:
Privacy-Preserving Machine Learning (PPML): federated learning, secure aggregation/MPC, trusted execution environments (TEEs), and selective encrypted computation (including a fully homomorphic encryption (FHE) feasibility study where justified)
Adversarial robustness and integrity under privacy constraints (evasion, poisoning, backdoors) and evaluation methodology
What you’ll do
Define, build, and refine threat models and evaluation scenarios for adversarial manipulation, poisoning/backdoors, and privacy attacks relevant to PPML
Prototype and evaluate one or more components (e.g., robust aggregation, privacy-preserving training/inference pipeline, TEE-based confidential inference, selective FHE feasibility study)
Document results clearly (technical notes, experiment reports, and contributions to the proposal work packages)
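To give a flavor of the prototyping work above, here is a minimal sketch of one common robust-aggregation technique for federated learning: coordinate-wise median, which tolerates a minority of poisoned client updates better than plain averaging. This is an illustrative example only, not a description of any existing Orange pipeline; the client updates shown are hypothetical.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client model updates by taking the per-coordinate median,
    a standard robust alternative to plain averaging: a small number of
    outlier (e.g. poisoned) updates cannot drag the result arbitrarily far."""
    return np.median(np.stack(updates), axis=0)

# Hypothetical round: three honest clients and one poisoned update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = [np.array([100.0, -100.0])]

agg = coordinate_wise_median(honest + poisoned)
# The aggregate stays close to the honest updates despite the outlier.
```

A mean over the same four updates would be pulled far off by the poisoned vector; evaluating exactly this kind of trade-off under realistic threat models is part of the internship's scope.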
What we bring
A friendly environment where you can grow as a researcher/engineer
Mentoring, training opportunities, and support to attend relevant conferences/meetups
An agile, international environment with strong security and AI expertise
Modern tools, real-world cybersecurity constraints, and research-to-prototype opportunities
What we expect you to bring
Enrolled in a Master’s or PhD program (security, ML, cryptography, or related)
Strong curiosity and motivation for research and hands-on experimentation
Basic knowledge of machine learning and security; interest in applied cryptography/PPML
Programming skills (Python required; experience with PyTorch/TensorFlow is a plus)
Clear communication and good energy in a collaborative team