These days it looks like every new startup, service or product needs to be AI-related. In a way, this makes sense, since AI (LLMs in particular) is without a doubt the most important and transformative technology to appear in recent years. However, in most cases it feels like these products start from a decision to offer something AI-related, while solving an actual problem or being useful comes as an afterthought: the classic case of a solution looking for a problem. Of course, in this environment, using the newest and flashiest LLM is of prime importance.
Of course, most of these companies and products will be short-lived. Those that survive, however, cannot be built solely on showcasing a technological gimmick; they will have to deliver a legitimate, value-adding product, one that happens to use an LLM to deliver that value.
Successful products will use AI as one ingredient in their recipe for success. But for the product to succeed, all of its components need to be tuned to work in unison, and that includes a specific LLM version. Once the product has been crafted, tested and delivered to users, it's not trivial to switch to a new LLM (even a more advanced version of the same one) and expect the product to keep working just as well.
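In practice, that dependency on a specific model version can be made explicit in the product's configuration. A minimal sketch of what that pinning might look like (all provider names and model IDs here are hypothetical, not real identifiers):

```python
# Hypothetical sketch: pin the exact model version the product was tested
# against, and refuse to run against anything else.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelConfig:
    provider: str
    model_id: str  # exact, versioned identifier used during testing
    max_tokens: int


# The product was crafted and tested against this exact version;
# a silent provider-side upgrade would invalidate that testing.
PINNED = ModelConfig(provider="example-host",
                     model_id="some-model-v2.1",
                     max_tokens=1024)


def is_tested_model(served_model_id: str) -> bool:
    """Return True only for the model version the product shipped with."""
    return served_model_id == PINNED.model_id


print(is_tested_model("some-model-v2.1"))  # the version we tested against
print(is_tested_model("some-model-v3.0"))  # a newer, untested version
```

The point of the check is the failure mode it surfaces: if the provider quietly swaps or retires the pinned version, the product fails loudly instead of silently degrading.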
However, if the product is based on a closed model, it will always be at the whim of the model provider, who can discontinue or modify it at any time. Even hosting the model through a third party, like Amazon Bedrock, does not guarantee that the LLM we are using will still be available in six months.
This is why I wouldn't consider building a real project on top of any model that isn't open. The latest frontier models are exciting and great for experimentation and personal use, but the lack of control that comes with a closed model is too much of a risk for any serious endeavour.