- FUTURE FIT | The Solopreneur Newsletter
- Posts
- Entry Jobs in Humane Tech
Entry Jobs in Humane Tech
DeepMind, Anthropic AI, Project Liberty, New_Public, GovAI

👋 Hello, Humane Tech Pioneers!
Welcome to your weekly dose of Careers in Humane Tech newsletter! This week, we will introduce the Future FIt 5 Entry-Level Job Opportunities paving the way to a more equitable digital future.
Ever stumbled upon a deepfake so convincing it made you do a double-take? Or found yourself spiraling down a social media rabbit hole filled with toxic content, wishing for a way out? Perhaps, you've even hit the 'deactivate' button to escape cyberbullying. These are the moments where trust and safety specialists, AI governance experts, and tech policy advocates step in. They're building a more humane digital virtual world.
Want to join the fight for a better online experience? Then explore these Future Fit 5 Entry-Level Job Opportunities!

with Project Liberty
(2 yrs experience), New York, USA
Mission:
Project Liberty seek to accelerate the world's transition to an open, inclusive data economy that puts citizens in control by building a better, more ethical web. Project Liberty aims to catalyze the building of a healthier technology ecosystem by forming an alliance of ethical tech companies, policy organizations, civil society, universities, and impact organizations. This collective effort seeks to usher in a new tech future that enhances democracy, improves societal well-being, and promotes open platforms, allowing for widespread benefits from online creativity and connection.
What You'll Do:
Management: Play a key role in managing and enhancing partnerships with top universities for Project Liberty Foundation, ensuring impactful research and content generation, while fostering effective communication and collaboration.
Drive Policy Outreach: Be at the heart of initiatives blending tech, policy, and civic engagement to frame a more ethical web. Coordinate outreach and engagement across various sectors and with university partners to promote Project Liberty's academic partnerships and research outcomes, including event planning and support.
Internal collaboration: Support and contribute to Project Liberty's goals through cross-functional collaboration on campaigns and initiatives, creating internal synergies, and promoting PL values, along with performing relevant tasks.
In-demand Skills:
Critical Thinking & Analytical Acumen Showcase your prowess in qualitative research, bolstered by quantitative skills for comprehensive data and policy analysis.
Communication Excel in articulating complex ideas clearly, fostering understanding and collaboration.
Company Culture:
Join a team of visionaries committed to agency & trust, pluralism & openness, and innovation & collaboration.

with New_ Public
(0-2 yrs experience), Remote, USA
Mission:
New_ Public is dedicated to fostering healthier online communities by curating positive interactions and content. Their mission is to combat the toxicity that pervades digital spaces, making the internet a safer place for dialogue and discovery.
What You'll Do:
Write and engage: Schedule and post regularly to our social media platforms. Elevate messages through appropriate channels and communicate with audiences.
Campaign development and storytelling: Create dynamic, engaging content campaigns tailor-made for particular platforms and own New_ Public’s short-form video efforts.
Research: Read published work, newsletters, and other content to understand the latest with New_ Public’s peer orgs and our niche of responsible and ethical tech. Seek out information that attempts to peer within the black box algorithms of the different platforms and try and understand types of content each is rewarding.
Collaboration and Communication: Collaborate with internal teams to understand ongoing projects and initiatives, and effectively communicate project outcomes and impact externally.
In-Demand Skills:
Proficiency in researching, writing, and editing social media content (microcopy, video scripts)
Experience with video production and post-production (Adobe Creative Cloud)
Familiarity with social media platforms (Instagram, LinkedIn, Threads, Mastodon, YouTube, TikTok)
Knowledge of productivity tools (Google Workspace, Slack, Airtable, Asana, WordPress, Sprout Social).
Communication Skills: Ability to articulate complex concepts clearly and compellingly
Interpersonal Skills: High emotional intelligence for collaborating in diverse, tight-knit teams
Subject Knowledge: Passion for building digital spaces/communities that serve the public good, working knowledge of the U.S. civic tech space
Familiarity with responsible/equitable design and design justice frameworks
Company Culture:
New_ Public's culture is likely characterized by a commitment to free expression, inclusivity, and the well-being of individuals in digital environments. They aim to ensure that technology serves to enhance human dignity and social cohesion, reflecting a dedication to the humane tech challenge. Their culture values respect, creativity, collaboration, and empathy.

with Anthropic (the people of Claude 3)
(2 yrs experience), San Francisco / New York / London/ Hybrid
Mission:
Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. The company's mission is to ensure that AI is safe and beneficial for customers and society at large.
What You'll Do:
Develop monitoring systems to detect unwanted behaviors from our API partners and potentially take automated enforcement actions; surface these in internal dashboards to analysts for manual review
Build abuse detection mechanisms
Surface abuse patterns to our research teams to harden models at the training stage
Build multi-layered defenses for real-time improvement of safety mechanisms
Analyze user reports of inappropriate content or accounts and build machine learning models to detect similar instances proactively
In-demand Skills:
Relevant Experience: 2-3 years in roles such as data analyst, data scientist, or software engineer, with a preference for experience in security, spam, fraud, or abuse detection.
Technical Proficiency: Skilled in SQL, Python, and various data analysis/data mining tools, capable of handling complex data-related tasks efficiently.
Communication: Excellent communication abilities, particularly in demystifying complex technical concepts for non-technical stakeholders.
Desirable Qualities:
Trust and Safety Experience: Hands-on experience in creating trust and safety mechanisms within AI/ML systems, such as developing fraud detection models or security monitoring tools.
Machine Learning Frameworks: Familiarity with machine learning frameworks like Scikit-Learn, TensorFlow, or PyTorch, enhancing the candidate's ability to work on sophisticated AI/ML projects.
Company Culture:
Anthropic is an unusually high trust environment: They assume good faith, disagree kindly, and prioritize honesty. They expect emotional maturity and intellectual openness.

with DeepMind
(0-2 yrs experience), Mountain View CA, USA & London, UK
Mission:
DeepMind’s mission is to build AI responsibly to benefit humanity
What You'll Do:
Identify and investigate possible failure modes for foundation models, ranging from sociotechnical harms (e.g. fairness, misinformation) to misuse (e.g. weapons development, criminal activity) to loss of control (e.g. high-stakes failures, rogue AI).
Develop and implement technical approaches to mitigate these risks, such as benchmarking and evaluations, dataset design, scalable oversight, interpretability, adversarial robustness, monitoring, and more, in coordination with the team’s broader technical agenda
Build infrastructure that accelerates research velocity by enabling fast experimentation on foundation models, and easy logging and analysis of experimental results.
Collaborate with other internal teams to ensure that Google DeepMind AI systems and products (e.g. Gemini) are informed by and adhere to the most advanced safety research and protocol
In-demand Skills:
You have at least a year of experience working with deep learning and/or foundation models (whether from industry, academia, coursework, or personal projects).
Your knowledge of mathematics, statistics and machine learning concepts enables you to understand research papers in the field.
You are adept at building codebases that support machine learning at scale. You are familiar with ML / scientific libraries (e.g. JAX, TensorFlow, PyTorch, Numpy, Pandas), distributed computation, and large scale system design.
Company Culture:
Collaboration is key to DeepMind's success, with a focus on teamwork, open communication, and sharing knowledge to achieve common goals. DeepMind is committed to ethical and responsible AI development, with a strong emphasis on transparency, accountability, and respect for privacy. Passionate about AI and its potential for positive impact, with a specific interest in the challenges of AI safety and alignment.

with Centre for the Governance of AI
(2 yrs experience), London, UK
Mission:
GovAI's mission is to build a global research community dedicated to helping humanity navigate the transition to a world with advanced AI.
What You'll Do:
Facilitating GovAI’s policy engagement:
- Organise events aimed at UK policymakers (roundtables, breakfasts, etc.), possibly with help from contractors.
- Help the team prepare for various policy engagement opportunities, e.g. producing memos and summaries of our work.
- Support the development and execution of policy engagement plans, primarily in the UK, but also in the US and the EU.
Boosting the Policy Team’s research output:
- Maintain a clear overview of the team’s research projects.
- Support research projects, including sourcing external feedback and copy-editing support.
- Improve our research communication pipeline, including writing opinion pieces and blog posts based on published research.
In-demand Skills:
Highly organized and competent at project management
Successful candidates will actively seek out feedback and opportunities to improve their skills.
Comfortable with working as part of a fast-moving team and working under pressure. Able to take high-level directions from our Head of Policy and other team members in a collaborative working relationship.
Excellent at oral and written communication.
Knowledgeable about the field of AI governance and GovAI’s work in particular. This role will require a solid understanding of current topics in the field – like responsible scaling policies, capabilities evaluations, and compute governance – as well as at least some familiarity with recent major policy developments in AI governance, like the Biden-Administration Executive Order, the UK AI Safety Summit, and the EU AI Act.
Company Culture:
GovAI promotes a collaborative, impactful, and research-driven culture, aiming to shape the future of AI governance through a combination of deep expertise, strategic advisory roles, and a commitment to developing practical, forward-looking policy solutions.

Thank you for reading!
Do you know of any learning pathways, communities, or resources focused on humane tech? Reach out by replying to this newsletter, or via direct message on Linkedin.