Loyae: Proactive AI Regulations Are Dangerous


We're Going To Be Okay


Author: John Lins, Date: 5/11/2023


As the world gains exposure to powerful AI models, panic may lead us to do things that feel logical but are ultimately irrational. AI regulation has been a hot topic recently, and while it seems sensible from a surface-level perspective, it simply isn't. The ethical concerns surrounding AI come from genuine compassion and concern for humanity, but we must still account for the potentially detrimental side effects of such regulations.

The past few months have given rise to major (and somewhat scary) innovations in autoregressive large language models. These strides forward have alarmed the average person, who understandably assumes that growth in the field will keep compounding at its current pace. In reality, the growth we are seeing is logistic, not exponential: it will soon hit a wall of diminishing returns. Recent innovations in AI have come from optimizing transformer-based models (a powerful architecture that leverages multi-head attention and positional encoding), but architectural breakthroughs of that magnitude are extremely rare, and another one is unlikely to arrive anytime soon. Even transformers themselves cannot be scaled indefinitely just by increasing the number of parameters; beyond a certain point, the improvements in accuracy are only marginal.
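To make the distinction concrete, here is a minimal sketch comparing the two kinds of curves. It is illustrative only: the constants are arbitrary and not fitted to any real capability data. An exponential curve grows without bound, while a logistic curve looks exponential early on and then flattens as it approaches a ceiling.

```python
import math

# Illustrative constants only -- arbitrary, not fitted to any real AI data.
CEILING = 100.0   # L: the plateau the logistic curve approaches
RATE = 1.0        # k: growth rate shared by both curves
MIDPOINT = 5.0    # t0: the point where logistic growth is steepest

def exponential(t: float) -> float:
    """Unbounded growth: keeps compounding at the same rate forever."""
    return math.exp(RATE * t)

def logistic(t: float) -> float:
    """Bounded growth: nearly exponential at first, then flattens toward CEILING."""
    return CEILING / (1.0 + math.exp(-RATE * (t - MIDPOINT)))

for t in range(0, 11, 2):
    print(f"t={t:2d}  exponential={exponential(t):>10.1f}  logistic={logistic(t):6.1f}")
```

Notice that early in the run the two curves are nearly indistinguishable, which is exactly why extrapolating from a few months of rapid progress is misleading: the logistic curve eventually stalls near its ceiling while the exponential one races off the chart.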

Despite this humbling reality, multiple groups and prominent individuals have called for proactive regulations to address the potential economic and ethical risks of big AI. While regulating AI may seem straightforward, the unintended consequences could be devastating.

One significant concern is that regulating AI could inadvertently favor large tech corporations, granting them a monopoly over AI development and deployment while hindering the open-source community and smaller startups such as Loyae.

Strict AI regulations would necessitate ethics teams and compliance departments to ensure adherence. Large tech corporations, with their vast resources and established infrastructure, would have the means to comply: they could hire specialized teams to navigate the regulatory landscape effectively. Smaller startups and open-source projects, on the other hand, have limited resources and would struggle to comply with the new rules. This disparity would disproportionately benefit the larger companies with more capital to burn.

AI-based startups would then be forced to integrate the vetted AI models from large companies via APIs. This dependence would incentivize whoever owns the model to charge exorbitant amounts per request. Such monopolization would make AI considerably less accessible to people and companies with less money to burn, creating unnatural inequality.

The US government is infamously slow, but when it comes to the panicked calls for AI regulation, its unresponsiveness is actually a positive. It is crucial that we put our nerves at ease, take a seat, and weather the AI storm for a few more months before analyzing the situation and addressing the outcome accordingly. In the meantime, we must rely on each AI initiative's PR incentive to deploy models that are as ethical as realistically possible.

Everything will be just fine, I promise.