Humanothon is a human-, world-, and change-aware ‘marathon’ – a thematic movement – for sense-making and building shared understanding.
Date: 9th November 2023
Time: 4 pm Helsinki | 3 pm Paris | 2 pm London | 9 am NYC
Duration: 2 hours
Theme: AI regulation – keep it safe and understandable.
In our first Humanothon with Hunome on ‘AI regulation, keep it safe and understandable’, we brought together people with different backgrounds and knowledge to make sense of how AI, on its rapid course of evolution, should be regulated. The results were fascinating.
Session 2 aims to bring more stakeholders with different skill sets into the Humanothon process and the SparkMap. We encourage field experts and enthusiasts to join this second Humanothon session and continue building a common understanding of how we should regulate AI: regulations that remain future-proof as the technology evolves, that help harness the power of AI applications for the common good, and that limit abuse and harmful applications.
This build is led by Linards Kalvāns, data practitioner and AI enthusiast.
Join us to explore this complex theme!
Intent of the Build: To bring more stakeholders with different skill sets into the Humanothon process and to continue encouraging sparkons.
Build Approach: Humanothon program session 2 and asynchronous contributions in the Hunome AI Regulation SparkMap.
Status of the Build: Ongoing
A few highlights from the trains of thought from Session 1:
You can also log in to explore the evolving SparkMap – a systemic view of the theme. Humanothon is a continuous build, and we work towards a synthesis.
How do we recognize when a decision made by an AI needs to be restricted because it poses unacceptable risk? Joel Pyykkö, an AI researcher from Helsinki, elaborates that we already see possible problems when AIs are left to work from their own conclusions, in the form of ‘hallucinations’. One classification schema for regulation could distinguish between AIs that are fully automated and ones that require a human to operate them at every step – a question for governance. There was some deliberation around what human-in-the-loop means: there is a difference between AI designed to help humans and AI that humans are there to oversee.
International compatibility of AI regulations
Gunta Krumina, a strategist, innovation consultant and systems thinking advocate, points out that AI regulations should be aligned at the international level. International agreements are much more complex to reach than legislation or regulations within a single country (even though within a single country it is never an easy and straightforward process). Therefore, we should find new ways of negotiating and reaching consensus on common points of AI regulation, on issues crucial for the future of humanity.
Besir Wrayet, a crisis management expert, raises concerns about industry self-regulation. Big tech companies have agreed to self-regulate, but how effective can self-regulation agreed by an industry be? New players will emerge with no obligation to follow goodwill rules defined by the old dinosaurs. We have seen how self-regulation has failed, for example, in the finance sector. Others in the SparkMap continue to ponder whether we can have shared self-regulation when AI cuts across all industries, with many applications and variations. One industry might perhaps regulate its players, at least to an extent, but how can all industries be covered effectively?
Linards Kalvāns is a data practitioner and enthusiast with hands-on experience in developing, implementing and maintaining a range of machine learning applications. Like many others working with AI from the inside, he is aware of and curious about the threats AI poses now and will pose in the future, and is keen to explore and find ways to mitigate them.
- During the registration process, please specify “AI Regulations” in the field ‘how did you hear about us’.
- You will receive the Zoom session link at the email address you use to register with Hunome.
- If you have already registered, let us know at hello (at) hunome (dot) com that you are interested in joining this session.
- Please note that the ‘add to calendar’ item at the end of this page only pencils the session in for you; it does not register your interest with us.
- Once you have registered with Hunome, you can find the kick-off and evolving SparkMap here.
We look forward to mingling with you in this thematic build of understanding.
Hunome is a product uniquely designed for building multidimensional understanding. It is the place for thinking and building knowledge together or solo – a home for the curious and for sense makers. A SparkMap is a non-linear, evolving and multidimensional map of understanding on a theme, and a great way to visualize the understanding a group of people has of a particular theme.
Humanothon is a movement, or a set of thematic movements, run on the Hunome product. You can take the lead on a theme that needs multidimensional understanding and that you want to map systematically with a group. The Hunome product is there to use, and our team helps leads set this up, providing materials and other support.
Please note, you do not need to run a Humanothon to build multidimensional understanding on Hunome. You can also simply click ‘New SparkMap’ once inside Hunome, craft your own understanding, and see others join.