
OpenAI's ChatGPT now remembers details to deliver personalized experiences.

The Memory feature, first announced in February, is now available to all paying ChatGPT Plus users worldwide, except for users in Europe and Korea.

All ChatGPT Plus users can now access OpenAI’s Memory feature, following a restricted deployment in February. OpenAI said that “Memory is now available to all ChatGPT Plus users except those in Europe or Korea” in an update to the “Memory for ChatGPT” blog.
Memory in ChatGPT enables the AI chatbot to retain or discard important instructions and information that the user provides in prompts. According to OpenAI, the feature lets ChatGPT Plus subscribers personalize their interactions with the chatbot and removes the need to repeat information.

In February, OpenAI said that the user will get full control over what the AI chatbot should remember from their conversation. Additionally, the company said that users will get the ability to delete a specific memory under the new “Manage Memory” section in settings alongside an option to disable the feature completely.

With a wider rollout now, OpenAI has made it easier to access and control memory. In the blog post, the company said that ChatGPT will let the user know when the memory has been updated. The user can then hover over the “Memory updated” message and select the “Manage Memory” option to review and change the saved details.

What is Memory for ChatGPT?

While using ChatGPT for conversation or assistance, users can ask the chatbot to remember specific details or give it instructions to follow when performing specific tasks in the future. With the Memory feature enabled, ChatGPT will also pick up details on its own and improve accordingly over time. For example, if you ask ChatGPT to summarise texts in a limited number of words and add bullet points at the end, it will use that format every time you ask for a summary in the future.

It should be noted that memories stored with ChatGPT are not restricted to a single conversation, and deleting a chat does not erase the memory. Users who want a conversation that does not use memories, without disabling the feature or erasing them, can use a temporary chat, accessible through a drop-down menu at the top of the screen.
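As an illustration of the idea only (this is not OpenAI's implementation; the class name, storage format, and prompt format here are hypothetical), a persistent preference memory that outlives individual chats, supports deleting single items, and is skipped in temporary chats can be sketched in a few lines of Python:

```python
import json
from pathlib import Path

class ChatMemory:
    """Toy persistent memory: stored preferences outlive any single conversation."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))

    def forget(self, key):
        # Analogous to deleting one item under "Manage Memory".
        self.facts.pop(key, None)
        self.path.write_text(json.dumps(self.facts))

    def build_prompt(self, user_message, temporary=False):
        # A temporary chat skips stored memories without deleting them.
        if temporary or not self.facts:
            return user_message
        notes = "; ".join(f"{k}: {v}" for k, v in self.facts.items())
        return f"[Remembered: {notes}]\n{user_message}"

mem = ChatMemory()
mem.remember("summary_style", "short bullet points")
print(mem.build_prompt("Summarise this report."))                  # memory applied
print(mem.build_prompt("Summarise this report.", temporary=True))  # memory skipped
```

The point of the sketch is that the memory lives outside the conversation: deleting a chat would not touch `memory.json`, matching the behavior described above.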

AI's increasing impact on journalism: the risks, the latest developments, and more.

Artificial intelligence: Google is testing a product called "Genesis," a technology it developed, with publishers.

Media analyst David Caswell told AFP that artificial intelligence is causing "a fundamental change in the news ecosystem" in the short term and is already upending journalism.

Caswell, formerly of Yahoo! and BBC News Labs, the innovation division of the British broadcaster, was speaking at a gathering of industry executives in the Italian city of Perugia convened to address the most pressing issues affecting their industry.

How do you see the journalism of the future?

“We’re not sure. Our goal, however, is to comprehend every possibility, or as many as possible, that may exist. But I believe that a few things are starting to become apparent. First, there’s a good chance that more material will be produced, originated, and sourced by machines. Thus, machines will gather more information for a great deal of journalism, produce more text, audio, and video, and create the kinds of consumption experiences that people have.

That is a fundamental shift in the news ecosystem specifically, as well as the information ecosystem as a whole. This is structurally distinct from the one we currently inhabit. We have no idea how long it will take; it may take two, four, or seven years. As there isn’t much friction, I believe it will go more quickly.

New devices, new technology, large sums of money for producers, and technical know-how are no longer necessities for the general public. Thanks to generative AI, everything that constituted a barrier in the earlier generation of AI is no longer one.

What are the latest developments underway in newsrooms?

“New technologies that facilitate AI workflow are one type of development; JP Politikens in Denmark, for instance, concentrated on increasing the effectiveness of their current products and operations. However, it also serves as a foundation for bringing their personnel, operations, and goods into this new AI world.

Google is testing a product called “Genesis,” a technology it developed, with publishers. Some publishers are creating their own. These utilities will be available in platform versions.

These are instruments that you bring your newsgathering into: on one side are your transcripts, PDFs, audio files, videos, and so on. The tool takes charge of tasks like analysis, summaries, script writing, and audio production.

The journalist’s responsibilities include directing the tool, editing, and thoroughly verifying the content. The job becomes using the tool, acting as an editorial manager for an AI tool.

In theory, it functions. However, that’s not the same as using it day in and day out, month in and month out, in a newsroom within a vast business. The key question is: will it be eagerly embraced and utilized in a way that ultimately yields little productivity, or will it significantly increase journalistic production?”

What is the cost?

“In the last decade, it was very expensive. It was very difficult: you needed the data, you had to build a data warehouse, have an enterprise deal with Amazon or Google Cloud, hire data scientists, and have a team of data engineers. It was a major investment. Only the BBC, the New York Times, and organizations of that level could afford it.

That’s not true with generative AI. You can run a news workflow through interfaces that cost 20 dollars a month. You don’t need to be a coder. All you need is motivation, enthusiasm, and curiosity.

There are lots of people in news organizations who would not have been involved in AI in the past because they did not have the technical background, and now they can just use it. It’s a much more open form of AI: one that smaller newsrooms can do a lot with, and that more junior individuals in more established newsrooms can do a lot with. I think it’s a good thing, but it’s also a disruptive thing. Often the internal politics in newsrooms are disrupted by that.”

At what stage of AI are we?

“AI has been around since the 1950s. But AI for practical purposes appeared with ChatGPT. It’s going to be quite a while — years — before we understand how to use them for valuable things. There are so many things that you can do with them.

The risk to journalism is that other organizations, start-ups, and tech companies will do things in news faster than the news world itself. Lots of start-ups have no editorial component at all. Some are swiping the content of news organizations; some are covering niches, monitoring press releases, social media channels, and PDFs of reports.”

What are the risks?

“Journalism has not been doing well for the last 10 or 15 years; there hasn’t been a credible vision of how its future would play out in the social media world. What AI does is give news organizations a chance to change that situation, to participate in a new ecosystem. It’s good to be optimistic, get engaged, explore, have projects and experiments, and maybe change your mindset. That’s positive.

As Jelani Cobb, Dean of the Columbia School of Journalism, says: ‘AI is the unignorable force that journalism will have to organize itself around.’ It is not going to adapt itself to journalism.

Startup Mahakumbh Event 2024 LIVE: PM Modi’s ‘startup’ dig at Rahul Gandhi

PM Modi will address thousands of prospective entrepreneurs, investors, and business visitors at the event.

On Wednesday morning, Prime Minister Narendra Modi is expected to speak at the Startup Mahakumbh Event 2024. Thousands of prospective investors, business owners, and guests will be able to watch the event via live streaming. His address will center on the government’s initiatives to promote emerging sectors such as deeptech, agritech, biotech, medtech, and AI.

From March 18–20, Bharat Mandapam in New Delhi will host the Startup Mahakumbh event 2024. Two of the leading trade associations, the Indian Venture and Alternate Capital Association (IVCA) and the Bootstrap Incubation & Advisory Foundation, are working together to organize the event. It is supported by the Department for Promotion of Industry and Internal Trade (DPIIT).

Piyush Goyal, the minister of commerce and industry, highlighted the event last month, stressing how Indian entrepreneurs are reshaping industries and becoming the backbone of the country’s economy. He emphasized that the connection between an ambitious India and the startup ecosystem will drive economic growth in the years leading up to India’s 100th anniversary of independence, or Amrit Kaal, and help make the country developed by 2047.

The commerce and industry ministry said in a statement on Tuesday that in addition to hosting over 2,000 startups, over 1,000 investors, over 100 unicorns, 300 incubators and accelerators, 3,000 delegates, 3,000 future entrepreneurs, and over 50,000 business visitors from across the country, the event has so far attracted participation from notable investors, innovators, and aspiring entrepreneurs.

This article was originally published by Hindustan Times.

OpenAI Sora creates videos: What is it, how can you use it, is it available, and other questions answered

OpenAI has generated significant excitement with the unveiling of its latest AI model, Sora, capable of producing minute-long videos from text prompts. While the results are intriguing, many questions remain unanswered. In this article, we address all of them.

In Short

  • OpenAI unveils Sora, a new AI model that creates realistic videos from text prompts.
  • Sora builds on DALL·E and GPT models, can animate static images and generate complex scenes.
  • Currently, Sora is accessible only to red team members and select artists for feedback.

OpenAI, the company behind ChatGPT, created some big waves on the internet on Friday as it unveiled its new AI model, Sora, which can create minute-long videos from text prompts. But why is Sora creating such a buzz when so many other AI tools do the same? It is because of how well Sora is able to do it, or so it seems from the results shared by OpenAI CEO Sam Altman and the model's limited pool of testers. So far, the videos generated by Sora have been super-realistic and detailed.

“We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” says OpenAI in a Sora blog post.

What is OpenAI Sora?

Sora is an AI model developed by OpenAI, built on past research in its DALL·E and GPT models. It can generate videos from text instructions and can also animate a static image, transforming it into a dynamic video presentation. Sora can create full videos in one go or extend already created videos to make them longer. It can produce videos up to one minute in duration while maintaining high visual quality and accuracy.

OpenAI says Sora can generate complex scenes with various characters, precise actions, and detailed backgrounds. Not only does the model understand the user’s instructions, but it also interprets how these elements would appear in real-life situations.

“The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style,” OpenAI said in a blog post.

Is it available, and how can you use it?

Sora is currently accessible only to red team members, experts in areas such as misinformation, hateful content, and bias, who examine critical areas for potential problems or risks. Additionally, OpenAI is granting access to visual artists, designers, and filmmakers to collect feedback on improving the model. However, the company does intend to make the model available to all users eventually. A statement from the blog reads, “We’re sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon.”

Is OpenAI Sora safe?

OpenAI has addressed the elephant in the room: if Sora can generate videos this realistic, is it safe to roll out to the public?

OpenAI says that it plans to implement several important safety measures before integrating Sora into OpenAI’s products. This involves working closely with red teamers, who are experts in fields like misinformation, hateful content, and bias. They will rigorously test the model to uncover potential weaknesses. OpenAI will also create tools to identify misleading content, such as a detection classifier capable of recognising videos created by Sora.

Additionally, OpenAI will adapt existing safety procedures developed for products like DALL·E 3, which are relevant for Sora. For example, their text classifier will screen and reject input requests that violate usage policies, such as those containing extreme violence, sexual content, or hateful imagery. The company has established robust image classifiers to review every frame of generated videos to ensure compliance with usage policies before user access.
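The screening step described above can be illustrated with a deliberately simplistic sketch. This is not OpenAI's classifier: a production system uses a trained model rather than keyword matching, and the category names and blocked terms below are hypothetical placeholders.

```python
# Toy request screen: reject prompts that match blocked policy categories.
# A real classifier is a trained model; this keyword lookup only mirrors the
# accept/reject interface described in the article.
BLOCKED_TERMS = {
    "extreme violence": ["gore", "torture"],
    "hateful imagery": ["hate symbol"],
}

def screen_prompt(prompt: str):
    """Return (allowed, reason); reason names the violated category, if any."""
    lowered = prompt.lower()
    for category, terms in BLOCKED_TERMS.items():
        if any(term in lowered for term in terms):
            return False, category
    return True, None

print(screen_prompt("A cat surfing a wave at sunset"))  # allowed
print(screen_prompt("A scene full of gore"))            # rejected
```

The same pattern, a classifier gating the pipeline before generation, is what lets rejected requests fail fast without ever reaching the video model; the per-frame image review mentioned above would sit on the output side of that pipeline.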

OpenAI is also actively collaborating with policymakers, educators, and artists worldwide to address concerns and explore the positive applications of this new technology.

“We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time,” OpenAI said in a blog post about Sora.

iQOO executive says the Neo 9 Pro prioritises camera quality over lens count as its launch draws near

iQOO prioritises camera quality in Neo 9 Pro with Sony IMX920 sensor, VCS colour grading, and AI night mode. Fewer cameras but better performance promised.

As iQOO gears up to launch its next performance-oriented mid-range smartphone, the company is emphasising the camera capabilities of the device, which isn't the case with many smartphones in this range. IndianExpress.com got in touch with Shankar Singh Chauhan, product manager at iQOO India, to understand more about the camera prowess of the Neo 9 Pro.

Chauhan said, “This time we have significant improvement in the camera performance of the iQOO Neo 9 Pro compared to its predecessor, especially in the main camera, which now uses the Sony IMX920 sensor, a sensor used on the flagship Vivo X100.” He also confirmed that the iQOO Neo 9 Pro will be priced under Rs 40,000, which will make it one of the most affordable Snapdragon 8 Gen 2 SoC-powered smartphones in the country.

When asked why the iQOO Neo 9 Pro has fewer cameras than its predecessor, Chauhan said, “According to the feedback we received, consumers didn’t want more low-quality cameras like the 2 MP macro. Hence, this time, we have reduced both the size of the camera module and the number of cameras, and the phone only includes a main camera and an ultra-wide-angle lens.” Last year’s iQOO Neo 7 Pro had a triple camera setup with a 2 MP macro lens.

Talking about some of the prominent camera features of the iQOO Neo 9 Pro, the spokesperson highlighted that the device comes with both hardware and software camera enhancements, including Vivo’s VCS colour grading technology, which reduces noise and offers more natural-looking colours. The device also has an AI-powered night mode, which helps capture clearer skies even in low-light situations.

Chauhan added that the iQOO Neo 9 Pro, with its dedicated Supercomputing Chip Q1, enables experiences like 144fps gaming through the game frame interpolation technique on select games, and it also enhances the gaming resolution to 900p.

The phone launches in India on February 22, where it will compete against other performance-oriented smartphones like the recently launched OnePlus 12R (review), which is based on the same Snapdragon 8 Gen 2 SoC. It is already available for pre-order on Amazon, and the listing confirms that the smartphone will come in at least two colour variants.

Other prominent features include a dual-tone design, Sony IMX920 sensor-powered primary camera, 5,160 mAh battery with support for 120W fast charging, 6.78-inch FHD+ 144Hz LTPO display with up to 3,000 nits of peak brightness, and up to 12 GB of RAM with 256 GB of internal storage.

This article was originally published on IndianExpress.com.

JEE Mains 2024 session 1 result today on jeemain.nta.ac.in: where and how to check

JEE Main 2024 Session 1 Result: This date was mentioned on the information bulletin of the entrance examination.

JEE Mains 2024 Session 1 Result: The National Testing Agency (NTA) will announce the results of the Joint Entrance Examination (JEE Main 2024) session 1 today, February 12. This date was mentioned in the information bulletin of the entrance examination. Candidates can check their scores on the official website once the results are announced.

Candidates have to use their application number and date of birth to download JEE Main scorecards.

JEE Main 2024 result date for session 1: February 12

Result time: Not confirmed

Official websites: jeemain.nta.ac.in, nta.ac.in, ntaresults.nic.in

Login credentials: Application number and date of birth.

How to Check JEE Main 2024 session 1 result

  1. Go to jeemain.nta.ac.in.
  2. Open the JEE Main 2024 session 1 scorecard download link.
  3. Enter your application number, date of birth and log in.
  4. Check your result.

The first session of JEE Main 2024 was conducted on January 24, 27, 29, 30, 31 and February 1.

On the first day of the examination, the BArch and BPlanning (paper 2) examination took place during the second shift, while the BE/BTech (paper 1) examination was conducted on all the other days and in two shifts.

A total of 12,31,874 candidates were registered for both papers of JEE Mains, of whom 11,70,036 took the test, NTA has informed.

NTA is expected to announce paper 1 results first, followed by paper 2.

Ahead of the results, NTA released the provisional answer key of the examination on February 6 and invited objections between February 7 and 9. The final answer key is also awaited.

All India ranks of JEE Main 2024 will be announced during the final results, after the session 2 examination is over.

JEE Main is held for admission to undergraduate Engineering, Architecture, Planning courses offered by National Institutes of Technology (NITs), Indian Institutes of Information Technology (IIITs) and other participating institutions.

The top 2.5 lakh candidates from various categories will also be eligible to appear in the JEE Advanced examination.

This article was originally published on Hindustan Times.
