Comet is an AI browser that can do more than answer questions — it can act: fill forms, complete assignments and even execute web workflows. But if a tool can finish your coursework in seconds, should students, teachers and security teams be worried?
How Comet’s agentic AI browser can complete coursework and the viral Coursera demo
Perplexity’s new AI browser is called Comet. It is very advanced. Comet does more than just search for answers. It can actually perform actions on the web. Think of it like a smart assistant. It can click buttons, fill out forms, and complete tasks for you.
This “agentic” ability lets Comet handle complex workflows. For students, this is a big deal. The browser can potentially do your homework. It can write essays, solve math problems, or finish coding assignments. This brings up many questions about academic honesty.
A recent demo of Comet became very popular. It showed the AI browser completing a full Coursera course. The demo highlighted how Comet watched video lectures. It answered quiz questions and submitted assignments. It did all this with very little human help. This demonstration truly showed Comet’s power.
Automating coursework is both exciting and worrying. It could help with learning. However, it also makes cheating easier. Educators and students are now thinking about what this means for school. How will learning change when an AI can do so much?
Security audits, prompt‑injection vulnerabilities and the education implications
The new AI browser, Comet, brings up important security questions. Any powerful tool needs to be checked carefully. These checks are called security audits. They help find weaknesses before bad things happen. It’s like checking a car for problems before a long trip.
One big worry is something called ‘prompt injection’. This happens when someone tries to trick the AI. They give it special commands. These commands make the AI do things it shouldn’t. For example, a student might try to make Comet reveal private information. Or they might try to make it bypass security rules. This is a serious problem for any AI system.
These security flaws have big effects on education. Schools hold a lot of private student data. If an AI browser has weaknesses, this data could be at risk. Protecting student privacy is very important. Also, prompt injection could be used to cheat on assignments. This makes it harder for teachers to know what students truly learn.
Educators must think about these risks. They need to set clear rules for using AI tools. Schools should also perform their own security audits. This helps keep student information safe. It also ensures fair learning for everyone. Using AI in schools needs careful planning and strong security.
Fonte: Fortune.com