Meta’s FAIR division defends open approach to AI models despite criticism
Meta’s FAIR team stands firm on releasing AI models for free, despite criticism that its approach isn’t open enough. While Llama 2 was released under relatively open terms, critics argue its license falls short of true open source. Joelle Pineau, FAIR’s head, says the approach strikes a necessary balance between sharing knowledge and protecting business interests.
Meta’s AI research division, Fundamental AI Research (FAIR), is committed to continuing its practice of offering AI models for free, despite facing criticism that its approach lacks true openness. In July, it released its large language model, Llama 2, under relatively open terms and free of charge. However, some argue that Meta’s licensing falls short of genuine open-source standards: the license requires companies whose products exceed 700 million monthly active users to request a separate license from Meta, and it prohibits using Llama 2 or its outputs to improve other large language models.
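For context on what “relatively open” means in practice, here is a minimal sketch of how developers typically access the gated Llama 2 weights after accepting Meta’s license. It assumes the Hugging Face transformers library and an access token (read from an HF_TOKEN environment variable) granted after license approval; the model ID is real, but this flow is an illustration of common usage, not part of Meta’s announcement.

```python
# Illustrative sketch: loading Llama 2 after accepting Meta's license terms.
# Assumes approved access on Hugging Face and a token in the HF_TOKEN env var.
import os

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # gated repo: requires accepting the license
token = os.environ["HF_TOKEN"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=token)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, token=token)

# Generate a short continuation to confirm the model loaded correctly.
inputs = tokenizer("Open-source AI means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```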
Joelle Pineau, head of FAIR and Meta’s Vice President of AI Research, acknowledges these limitations but contends that they strike a necessary balance between sharing information and addressing business concerns. How much FAIR releases depends on factors such as how safe the code would be in the hands of external developers. Pineau underscores the importance of diverse feedback and collaborative effort in developing generative AI models.
Why does this matter?
Meta’s approach to openness sets it apart from other major AI companies. The company also believes that collaborating with industry groups is the way to foster safe and responsible discussion about AI within the open-source community.
The debate about Llama 2’s open-source status persists, and the broader AI community is exploring new licensing schemes better suited to the unique characteristics of AI models, such as their extensive training datasets and potential copyright infringement risks. The Open Source Initiative (OSI) is reevaluating how licenses can be adapted for AI models while upholding essential open-source principles. Additionally, a Stanford report emphasises the importance of AI developers acknowledging potential risks and establishing channels for feedback, a crucial aspect of open source.