This study presents a systematic content analysis mapping the landscape of eighty-seven ethical tools along three dimensions: the ethical principles addressed, the approach promoted (technological vs. socio-technical vs. societal), and the kind of assessment endorsed (self-assessment vs. assessment by others). The findings reveal a skewed representation of, and emphasis on, specific ethical principles across these tools. Furthermore, the tools primarily advocate a technological or socio-technical approach; only a few emphasize the importance of a societal approach. The analysis also highlights a reliance on self-assessment within these tools, raising concerns about the objectivity and comprehensiveness of ethical evaluations in AI development. These findings lend empirical support to recent AI and ethics research advocating bottom-up approaches that embrace diverse perspectives, challenging the dominance of broadly stated ethical principles, diagnostic tools, and an overreliance on self-assessment.