
Spanish blogger Mary Kaldas and her husband Alejandro Sid were unable to go on their planned vacation. This was reported by Zamin.uz.
This situation arose due to an incorrect answer from the artificial intelligence they trusted — ChatGPT. The couple intended to travel to the US territory of Puerto Rico.
Before booking flights, hotels, and events, they asked the chatbot whether any additional documents were required. ChatGPT told them that no visa was needed, but it said nothing about ESTA (the Electronic System for Travel Authorization), which is required for visa-free entry into the US and its territories.
The travelers only learned this at the airport during check-in, when staff refused to let them board the flight because they lacked ESTA authorization.
“I always do a lot of research, but this time I asked ChatGPT and it said no,” Mary wrote on her blog. The video she posted about the unfortunate incident quickly went viral.
Opinions in the comments split into two camps: some sympathized with the couple, while others criticized them for planning the trip irresponsibly. Kaldas joked that the chatbot might have acted “out of spite”: “Sometimes I insult it and tell it it’s useless, but I still ask it for good information... maybe that’s why it took revenge on us.”
The incident is another reminder to treat answers from artificial intelligence with caution and to verify important information against official sources.