Principal Researcher in Data Centre Networks

Job Title: Principal Researcher – Data Centre Networks
Location: Zurich, Switzerland
Job Type: Permanent

Relocation required // On-site working required // Visa sponsorship available!

In this role, you will lead research on and set the direction for DCN services and architecture, including key DCN algorithms and protocols. You will also help the team deliver key technological innovations and achieve key technical breakthroughs.

Responsibilities:
Conduct cutting-edge research on next-generation Data Centre Network architectures and new technologies for large-scale training and inference systems
Collaborate with a wide range of colleagues and stakeholders: researchers and experts in Switzerland and around the world, Swiss and European universities, and product teams
Optimize the performance and efficiency of parallel computing systems, especially large-scale AI clusters, primarily through the integration of network, computing, and storage systems
Optimize the performance of inference systems, drawing on advanced techniques such as Prefill-Decode Disaggregation, KV cache pooling, etc.
Identify technology trends in the field of Data Centre Networking and investigate new communication protocols for accelerating LLM training and inference systems
The scope includes, but is not limited to, network topology, traffic control technology, software-defined networks, hybrid optical/electrical networks, in-network computing, etc.

Requirements
PhD in Computer Science, Electronic Engineering, Artificial Intelligence, Automation, Mathematics, Physics, or another relevant discipline
At least 5 years of research experience in Data Centre Networks or large-scale training and inference systems, in industry or academia
Proficiency in data centre networks and high-performance interconnection technologies
Sound experience in performance or efficiency optimization of parallel computing systems.
Solid knowledge of LLM architectures, parallelism strategies, and model optimization mechanisms
Strong knowledge of large-scale distributed deep learning and familiarity with the corresponding infrastructure and key technologies
Excellent communication skills in English, both written and verbal
Ability to engage with a multicultural team both locally and across multiple global sites
Team spirit with the ability to work independently

Candidates should have research experience and strong cross-domain knowledge in the following areas:
Infrastructure and key technologies of large-scale training/inference, combined with systematic thinking about how to improve system scale and efficiency.
Modern network technology or high-performance interconnection, and cloud DCN technologies, including network architecture, resource pooling techniques, optical network techniques, etc.
Performance or efficiency optimization of parallel computing systems, and research on distributed machine/deep learning, distributed storage, and high-performance computing systems, including network architecture, networking solutions, and the design of new device form factors.
Network theory and optimization algorithms, focusing on DCN service assurance technology, network congestion control, traffic scheduling/forecasting, TCP/RDMA acceleration, and low-latency guarantees for DCNs.
New data centre architectures, topologies, protocols, hardware, and algorithms, such as CXL bus protocol extensions, non-CLOS topologies, GPU cluster interconnects, and DPU offloading, among other new technological directions.

If you're interested in learning more, please reach out to daniel@microtech-global.com for more information.

19003U3F
© 2025 microTECH Global Limited
Headquarters: Park House, Park Street, Maidenhead, Berkshire SL6 1SL
Bristol, UK: Office 202, Origin Workspace, 40 Berkeley Square, Bristol BS8 1HP
Bengaluru, India: FF-2 Ozone Whites, Doddanaga Mangal, Electronic City Phase-2, Bengaluru, Karnataka 560100, India