Industry | CineD

Hollywood is in Decline – Union Strikes Partly to Blame, Says Scott Galloway
Thu, 11 Jul 2024

In a recent interview, NYU marketing professor and podcaster Scott Galloway – an outsider to the film industry – shared some provocative insights about the “vanity industry” of being a filmmaker in Hollywood, an “industry in structural decline,” and why he thinks the writers’ and actors’ unions did a terrible job with their recent strikes, achieving the opposite of what they intended.

I’ve been listening to Scott Galloway’s insights on his podcasts Pivot and The Prof G Show for a while, and he is known to deliver thought-provoking sound bites about the economy, but usually not about the film industry. That was until I listened to the latest episode of “The Town” podcast by Matthew Belloni, in which Galloway gave a scathing assessment of the situation Hollywood is in and of how much, in his view, the unions did their members and the industry as a whole a disservice with their recent strikes. And I think he has a point.

Hollywood’s Big Big Tech Problem – Scott Galloway on “The Town with Matthew Belloni”

The entire episode is worth a listen:

Why the strikes gave the studios the perfect excuse to cut costs

Scott Galloway was critical of the writers’ and actors’ strikes, saying the unions lacked leverage and allowed the industry to reshape itself during the walkouts. He argues the strikes resulted in a transfer of wealth from union members and smaller streamers to Netflix. Galloway believes the gains made by the unions (e.g., a 5% wage increase, AI protections) were insignificant compared to the losses from being out of work for months. He suggests there are now fewer writers making money post-strike, with overall buying and orders down, and that the strikes gave the streamers the perfect excuse to cut excess costs and projects, as they were already in the middle of an unsustainable “streaming bubble,” in cutthroat competition with each other.

At a time when studio spending was out of control because of the fierce competition between the streaming platforms – which, of course, benefited writers and the creative community with plenty of work – the strikes “forced a multilateral pause in spending,” allowing the studios to reevaluate their spending and figure out whom they really need and don’t need.

Scott Galloway on “The Town with Matthew Belloni”
Too much choice, too much content: the streaming wars were unsustainable, and the strikes gave the streamers the perfect excuse to cut costs before things went bust. Image credit: Depositphotos

What should have happened: unions and studios joining forces to sue AI companies

Galloway argues the unions should have partnered with studios to fight against tech companies and AI rather than fighting each other. He advocates for studios to sue AI companies for crawling their content without compensation and criticizes the entertainment industry for not being more aggressive in protecting their interests against tech companies and AI.

After all, in other areas of media, such as newspaper publishing, publishers like the New York Times are suing OpenAI for “stealing” their content to train their large language models (LLMs), and major record companies are suing AI music startups Udio and Suno for “mass infringement” of copyright (we reviewed these AI music services recently). One has to agree with Galloway and wonder why the film industry is still pursuing an inconsistent appeasement policy toward AI companies rather than squeezing them in court.

If the union had any sense, it would be spending all of its money to hire very aggressive law firms and get every single studio on their side. They should be partnering together to try and figure out a way to sue the shit out of all the LLMs and AI companies if they’re crawling their data. They participate in those revenues. Instead, they’re fighting each other, and all they’re doing is making Netflix wealthier and letting the AI and LLMs continue to crawl their data.

Scott Galloway on “The Town with Matthew Belloni”

Paramount Pictures sold to son of tech billionaire

The recent announcement of Paramount Pictures being taken over by Skydance, David Ellison’s production company, might only underline the fact that Hollywood is selling out to big tech instead of fighting it in court – after all, David Ellison is the son of billionaire Larry Ellison, founder of Oracle, a tech giant that is heavily involved with generative AI data centers. (This takeover was announced weeks after the recording of Galloway’s interview with Belloni, which is why it’s not a topic in that podcast.)

Paramount Pictures is being taken over by David Ellison, the son of Oracle founder Larry Ellison – is Hollywood selling out to big tech instead of fighting AI theft? (Image credit: Brian van der Brug / Los Angeles Times via Getty Images)

Hollywood is “a vanity industry in structural decline.”

Looking at the industry through an economist’s lens and focusing on the numbers, Scott Galloway gives a scathing assessment of the film business.

He suggests the streaming market is consolidating and predicts further industry reshaping. Galloway advises young people to be cautious about entering the entertainment industry, describing it as a “vanity industry” in structural decline. He believes the future of entertainment is shifting towards smaller screens (e.g., TikTok, YouTube) and away from traditional Hollywood productions.

As an example, he cites the fact that 87% of SAG-AFTRA members didn’t have health insurance last year because they didn’t make more than $25,000, while only the top 10% can make a decent living, with the top 1% taking in most of the money.

Are we filmmakers just chasing an unrealistic dream?

It’s interesting and sobering to hear an outsider to the filmmaking industry give an assessment of the state of things. Are too many people chasing the Hollywood dream when it’s not something with a big future in our world? Is Hollywood in decline? It’s sad to think about and admit, but I am curious about your take on this. Sound off in the comments below – I would love to get a discussion about this going.

SmallHD Vision 17 – On Set HDR Monitoring for Netflix Limited Series “Griselda”
Wed, 10 Jul 2024

Setting the visual aesthetic for Netflix’s limited series Griselda involved creating a unique ’70s Polaroid-inspired color palette that maximized the visual dynamic range, which required the development of an HDR workflow from production through post. The filmmakers wanted an effective on-set solution for integrating HDR monitoring right from the start, and to achieve this, Netflix arranged a “monitor shootout,” which ultimately led to the selection of SmallHD Vision 17 monitors. Let’s delve deeper into this choice.

In 2019, SmallHD released its first production monitors, introducing the Vision and Cine series. At launch, both lineups had three monitors: a 13″, a 17″, and a 24″ version. All of them have impressive specs, such as four 12G-SDI inputs, a 10-bit panel capable of covering 100% of the DCI-P3 color space, 2,000+ local dimming zones, and HDR capability via TrueHDR technology, reproducing a whopping 1,000,000:1 contrast ratio.

On the “Griselda” set with SmallHD Vision 17. Source: SmallHD

SmallHD Vision 17 – features

Before we dive deeper, let’s take a closer look at all of the specifications and features of the SmallHD Vision 17:

  • Weight: 12.9 lbs / 5.8 kg
  • Color Coverage: 98.8% DCI-P3
  • Panel Resolution: 3840 x 2160
  • Color Bit Depth: 10-bit
  • Backlight Type: Full-Array Local Dimming
  • Contrast Ratio: 1,000,000:1
  • Video I/O: 4x 12G/6G/3G/HD-SDI, HDMI 2.0
  • Max Brightness: 1,000 nits
  • Power Input: 14.4V 6.2A DC
  • Pixel Density: 254 ppi
  • Bottom Cheese Stick: Included
  • Mounting Options: 1x 110mm VESA-compatible rear mount, 26x 1/4″-20 mounting points, 4x 3/8″-16 mounting points

In addition, all production monitors run on the SmallHD PageOS, which includes Color Pipe, Multiview, Video Engineering Tools, and more.
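
As a quick sanity check on the numbers above: the UHD panel resolution and the listed 254 ppi pixel density together imply the panel’s diagonal size. Here is a small back-of-the-envelope calculation in Python, purely illustrative:

```python
# Back-of-the-envelope check: what panel diagonal do the listed specs imply?
import math

width_px, height_px = 3840, 2160   # panel resolution from the spec list
ppi = 254                          # listed pixel density (pixels per inch)

diagonal_px = math.hypot(width_px, height_px)
diagonal_in = diagonal_px / ppi
print(f"Implied panel diagonal: {diagonal_in:.1f} inches")  # ~17.3 inches
```

That works out to roughly 17.3 inches, consistent with a 17-inch-class monitor.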

Shooting Netflix “Griselda” on HDR

Griselda is a Netflix-exclusive series starring Sofía Vergara as Griselda Blanco, a devoted Colombian mother who built one of the most profitable cartels in history. The series was directed by Andrés Baiz and shot by director of photography Armando Salas, ASC.

Netflix has been exploring the possibilities of HDR since launching HDR programming in 2016. Their Production Technology group works closely with creative teams to push the boundaries of imaging advancements like HDR.

Shooting in HDR is a bit more complex than SDR and has its own challenges. Early in the look development, the creative team and Salas saw HDR as an opportunity to use grain and shadows to create a more texturally complex look that would suit the story.

The process is documented in “Griselda: An HDR Workflow Case Study,” where cinematographer Armando Salas, ASC, and Light Iron Principal DI Colorist Ian Vertovec (who created customized LUTs to maintain visual consistency across screens) share their thoughts on implementing an HDR/SDR workflow to ensure consistent visual quality throughout the series.

According to the team, image monitoring consistency and reliability throughout the production were among the most important things. With this in mind, the team incorporated SmallHD Vision 17 monitors from the beginning. So, what made this particular monitor the right choice for Griselda?

Armando Salas, ASC. Source: SmallHD

A look at the SmallHD Vision 17 on set

The best way for me to guarantee that everything from the director, camera team, art department, and wardrobe will be accurately displayed to the end user is to use a color-managed workflow with on-set HDR monitoring. That way, I’m never leaving P3/PQ color space. Meanwhile, every other monitor on set is in 709/SDR. Because Ian built us a bulletproof color-managed workflow, the creative intent is also visible on the SDR displays, which carries over to SDR dailies.

Armando Salas, ASC

Price and availability

The SmallHD Vision 17 is available now for $9,999, while its bigger 24-inch sibling, the Vision 24, retails for $14,999.

For more information and details about the Vision 17, please visit SmallHD’s website here.

Have you already used SmallHD production monitors? Have you already shot a project in HDR? Did you watch Netflix’s Griselda? Don’t hesitate to let us know in the comments below!

The Netflix of AI? – Fable’s Showrunner Platform Lets Users Create Custom Episodes
Mon, 08 Jul 2024

We write a lot about artificial intelligence in terms of tools that can simplify or enhance different filmmaking workflows. You know, all those automated captions, Magic Mask, image generators for mood boards, and so on. But what if AI could take over the whole production process and let viewers become showrunners with just a few clicks? That’s what Fable’s streaming platform “Showrunner” promises to achieve. How, why, and with what possible consequences? Let’s find out together.

Have you seen “Joan Is Awful,” the “Black Mirror” episode from the latest season in which the protagonist decides to watch a streaming show with her boyfriend and finds one about her own life? In it, Salma Hayek plays a version of Joan that depicts only her bad character traits, and Joan gets mad. That’s one of the AI development nightmares I can imagine. Another one is everybody generating films and series in a few clicks, without any restrictions, regulations, or human creators behind them. Are we heading in this direction already? Then stop this train, please; I’m getting off!

How does Fable’s Showrunner streaming platform work?

Fable Studio is a generative AI video startup that attracted huge attention last year. In the middle of the joint strike of Hollywood actors and writers, the company released a research paper showcasing its approach to “generating high-quality episodic content” using AI. The tech presented could write, direct, edit, voice, and animate entire shows. As an example, the developers released nine short AI-generated episodes of “South Park” (with the remark that the famous cartoon show was used for research only).

It seemed like a success, so Fable decided to take the research even further and, roughly a year later, launched a streaming platform called “Showrunner.” This service allegedly allows users to generate custom animated full-length episodes from text descriptions (by stitching several scenes together), with control over dialogue, characters, and scene flow.

Generating TV shows at home should be as simple as browsing Netflix. The Netflix of AI isn’t about passive entertainment: it’s two-way: Make and Watch. Who will make the best episodes of shows? A fan with no access to Hollywood or the creator of the show? Let’s find out!

A quote from Fable’s X channel

Some shows are already up and running

At the moment, “Showrunner” features eight AI-generated shows on its platform. All of them are animated, but the genres range from the horror anime “Ikiru Shiny” to a heartwarming drama about AI-enabled devices called “Pixels.” The service is not open to the public yet, but anyone can sign up on a waitlist to test it for free.

Image source: “Showrunner” webpage

The idea behind “Showrunner” is that users will create their own content for existing shows whenever they run out of episodes. According to the announcement, the best entries will be included in the official catalog. The platform also lets you upload yourself and your friends and use them as characters in the shared universe of Sim Francisco, where several of the shows take place.

That’s not the only plan, though.

After the South Park episodes, almost every studio in Hollywood reached out – and we’re exploring with them this idea of interactive TV shows, where fans can make new episodes with revenue back to the original creators.

A quote from Fable’s X channel

The money question in Fable’s streaming platform

There are no concrete plans or announcements of collaborations with Hollywood studios so far. But for active users whose generated episodes are picked up by the AI streamer, the developers promise remuneration and revenue sharing.

The bigger question in this regard is our classic one: what content did Fable’s simulation learn from?

CEO Edward Saatchi said in one interview that the system is trained on “publicly available data.” Without elaborating on what kind of data, he added: “What matters to me is whether the output is original.” I’m not sure this is how things are going to work for generative AI in the future, though.

What about the future?

A testing version of “Showrunner” is said to run until the end of the year. At the moment, Fable’s AI doesn’t have the capability to create live-action scenes and is limited to generating animation. However, judging by how other AI video generators are evolving, it is likely that Fable Studio will take this course as well.

For a lot of creators, “Showrunner” is a manifestation of their anxiety over AI: will it replace us all in the long run? For now, that seems impossible. But what if Hollywood studios hand over beloved shows to the audience? Isn’t that what they want – stretching budgets by giving creative tasks to AI instead of paying human artists? These are topics that raise concern.

What about you? How do you feel about Fable’s streaming platform “Showrunner”? Would you like to try it? Or is it more of a nightmare come true for you? What positive and negative effects could it have on our industry? Let’s talk in the comments below!

Feature image source: Fable Simulation’s X account.

Is the First Generative AI Camera on the Horizon?
Sun, 07 Jul 2024

The CMR M-1 is a new concept camera. It isn’t headed for the mass market, and it requires a somewhat broader take on the fundamental idea of what a camera is. The device is co-developed by SpecialGuestX, an interesting creative technology agency, and 1stAveMachine, a global mixed-media production company. As you’ve probably noticed, no traditional camera manufacturers are on board – quite a hint regarding the product. So what is the CMR M-1?

The simple, minimalistic design of the CMR M-1 generative AI camera hides a rather complex contraption. The camera uses a FLIR (forward-looking infrared) sensor of an undisclosed size (but considering the showcased lenses, I’d expect it to be no larger than Micro Four Thirds). The choice of an IR-capturing sensor is interesting, though no related features have been showcased as of now. The CMR M-1 is a massive, box-shaped camera whose design harkens back to 16mm film cameras. This is true for much more than the physical design.

The first Generative AI camera. Image credit: SpecialGuestX

On board generative AI

The AI side of the CMR-M1 is based on Stable Diffusion algorithms, with five different LoRAs available. LoRA, or Low-Rank Adaptation, is a method of fine-tuning Stable Diffusion checkpoints. Without diving too deep, it resembles an AI LUT or film simulation in terms of its effect on the workflow and end result.
The LoRAs are loaded onto minimalistically designed cards, which have a dedicated slot in the camera. Once a LoRA is loaded, the intensity of the generative AI effect is tuned via a large silver-black dial on the right-hand side. Though the CMR-M1 is larger than some powerful desktop computers, the generative AI processing itself is done on external servers. This raises some challenging questions regarding connectivity that are yet to be answered.
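
To make the “AI LUT” analogy a bit more concrete, here is a minimal, hypothetical sketch of how a LoRA is typically applied to a Stable Diffusion pipeline with Hugging Face’s diffusers library. The base checkpoint is a commonly used public model, the LoRA file path is a made-up placeholder, and the scale parameter stands in for something like the camera’s intensity dial – this is not the CMR M-1’s actual software.

```python
# Illustrative only: applying a LoRA "look" to Stable Diffusion with diffusers.
# The LoRA file name below is a placeholder, not an asset of the CMR M-1.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # public base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load a fine-tuned "look" - conceptually similar to slotting in a film-simulation card.
pipe.load_lora_weights("path/to/16mm_film_look_lora.safetensors")  # hypothetical file

# The "intensity dial": scale the LoRA's influence between 0.0 (off) and ~1.0 (full).
image = pipe(
    "a portrait lit by window light, heavy film grain",
    cross_attention_kwargs={"scale": 0.6},
).images[0]
image.save("generated_frame.png")
```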

The Camera part of the “Generative AI Camera”

The specs of the CMR M-1 are quite poor compared with the current roster of cameras. Resolution maxes out at an odd 1368×768 with a whopping 12fps frame rate. Yeah, that’s twelve frames per second. No additional info regarding ISO, dynamic range, or color depth. By now, many of our readers might be wondering why we are discussing this seemingly underwhelming camera. Some may speculate that we’re reporting on this simply because AI is currently trending, and who wouldn’t want to capitalize on some juicy SEO traffic? However, I’ll argue that the interesting part about this clickbait-titled “First Generative AI Camera” lies not in its capturing specs but in how it challenges the concept of the camera itself.

What makes a camera?

A camera is quite simple to define. It’s a light-capturing device, able to fix light onto readable media. A camera requires no more than three basic elements:

  • A light-focusing apparatus, be it an optical lens or a pinhole
  • A light-sensitive surface with the ability to fix the captured image
  • A chamber connecting these two in utter darkness. This is what gave cameras their name – from “camera obscura,” Latin for “dark chamber.”

The CMR-M1 includes all three; hence, it is considered a camera. But it adds something we haven’t seen yet – a generative AI algorithm. Or have we? Aren’t noise reduction algorithms somewhat AI-based? Isn’t automatic white balance based on machine learning? And what about modern auto exposure? What should we call modern autofocus algorithms? Though there are some differences between the level of generative AI offered by the CMR-M1 and these examples, there are also some significant similarities.

Image credit: Panasonic HD, Panasonic UK

SOOC redefined

SOOC, Straight Out Of Camera, is a term used to describe images untouched by software. Used as a declaration of authenticity in an age of Photoshop and Instagram filters, it holds very little truth. Every single ray of light captured by digital or analog media is subjected to rather heavy interpretation before it transforms into a viewable image. Let’s take the common Bayer array as an example.

Each photosite captures either a red, green, or blue value. The other two values required for the RGB output are interpolated from neighboring photosites, following complex and secretive algorithms that every company fiercely guards. Now, let’s add color profiles, another layer of interpretation (or manipulation) made by the camera. The analog workflow is no different in that regard: our choice of film stock is similar to any digital filter. The main difference lies in timing – we can only choose the film stock before shooting, while the digital workflow allows for much more potent post-production. Taking that trait into consideration, I’ll argue that the CMR-M1’s workflow has more in common with analog filmmaking than with its digital counterpart.
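
To illustrate just how much interpretation happens before an image is “straight out of camera,” here is a deliberately simplified demosaicing sketch – a naive bilinear interpolation over an RGGB Bayer mosaic. Real in-camera pipelines are far more sophisticated (and proprietary), so treat this purely as a conceptual illustration:

```python
# Naive bilinear demosaic of an RGGB Bayer mosaic - conceptual illustration only.
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """raw: 2D array of photosite values laid out in an RGGB Bayer pattern."""
    h, w = raw.shape
    # Masks marking which photosites actually measured each color.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0).astype(np.float32)
        # Normalized convolution: estimate missing values from measured neighbors.
        num = convolve2d(plane, kernel, mode="same")
        den = convolve2d(mask.astype(np.float32), kernel, mode="same")
        interp = num / np.maximum(den, 1e-6)
        # Keep the measured value where it exists, the interpolation elsewhere.
        rgb[..., ch] = np.where(mask, raw, interp)
    return rgb

# Example: demosaic a synthetic 4x4 mosaic of random sensor values.
mosaic = np.random.rand(4, 4).astype(np.float32)
print(demosaic_bilinear(mosaic).shape)  # -> (4, 4, 3)
```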

A Kodak camera advertisement that appeared in the first issue of The Photographic Herald and Amateur Sportsman, November 1889. Artist unknown

A glimpse into the past

“You press the button, we’ll do the rest.” Kodak’s legendary slogan stood as a major inspiration in the design process of the CMR-M1. In a way, I think the companies behind this concept product managed to recreate that experience to a level you (and I) wouldn’t expect from any device, let alone “the first generative AI camera.” The raw unpredictability of real-time generative AI may just offer a new level of shock and awe, especially when compared with modern cameras able to capture technically superb footage with little to no skill or know-how on the photographer’s part. It’s not just the vintage design and the tactile card-based LoRAs that deliver the experience Kodak intended. It’s something deeper and more fundamental – the unique core of the CMR-M1.

A glimpse into the future

As Aaron Duffy, founder and executive creative director of SpecialGuestX, puts it: “Sometimes, to imagine what the future might be like, you have to prototype it.” This quote encapsulates the CMR-M1. It’s a prototype. It provides a glimpse into features that may trickle down into future cameras. While I don’t see this level of generative AI coming to professional cameras in the foreseeable future, I do think some version of these abilities may find its way into our field. Imagine a camera that can generate a resolution higher than the captured resolution – this could enable more affordable sensors, faster capture speeds, and so on. Or imagine a camera that can manipulate lighting and requires much less on-location setup. It may sound like sci-fi now, but wasn’t everything we have today sci-fi a couple of decades ago?

The CMR-M1 is only a prototype. No mass market planned, no “brochure features,” no cost cuts. We won’t get our hands on it, yet this camera may give us a glimpse into what cameras may become and what they already are.

DSC Laboratories Ceases Operations – Farewell to Their Test Charts
Wed, 03 Jul 2024

In an open letter published on their website, Susan Corley of DSC Laboratories has announced that the company will soon cease operations. You have until October to order one of their award-winning camera test charts, and these will continue to be produced and shipped until spring 2025.

Like most success stories, DSC Labs’ adventure started in a basement, back in 1962. Founded by David and Susan Corley, the company grew over the following six decades to become a leading manufacturer of high-quality test charts for the film and television industry. These include precise tools for assessing the performance of a camera setup in terms of dynamic range, resolution, color, and much more.

If you’re an avid CineD reader, then you can imagine how sad of a day this is. Indeed, DSC Labs’ Xyla 21 High Dynamic Range test chart stands at the foundation of our rigorous Lab Test procedure – a workflow that we developed with a scientific approach to offer our beloved readers a benchmark of their camera’s performance.

DSC Labs Xyla 21 High Dynamic Range test chart. Image credit: DSC Labs

As mentioned in the letter below, DSC Labs will accept orders until October 2024, will keep producing their test charts until next spring, and will eventually stop shipping products after April 2025. So you have one last chance to stock up.

Before you go on reading, we would like to take some time to thank David and Susan for their painstaking effort in engineering these test tools over the last 60 years. If we, as filmmakers, can make the best out of our gear, we also owe it to you!

Greetings from DSC Laboratories…

A BIG thank you, to each and every one of you who has used our test materials over the past years.  After more than six decades of providing “better images through research”, David and I are winding up the operations of DSC, and would like to provide you, our loyal customers, with a final opportunity to stock up on the award-winning DSC products that you have come to know and trust.

Our testing has found that DSC test charts, when stored in cool, dark conditions, have very good stability.  To facilitate the storage of DSC test charts, and then the measurement of their time-in-service, we have removed the printed replacement dates and have included a 24-month time strip, to be activated when a fresh test chart is put into use.

To wind up our operations in an orderly manner, we plan to accept orders for DSC products, for the next few months (until the end of October), and then to produce and deliver these products until the spring of 2025.  As supplies for the manufacture of certain products are limited, we would ask that you please send in your orders as soon as reasonably possible.  We will do our best to fill all orders, but may suggest a similar product (if the one you have ordered is unavailable), or in some cases we may be unable to provide it.  Recognizing that you may be ordering products for use in the future, please let us know if you would like to delay the shipment of them to you for a brief period – (but to no later than April 2025).

We are currently speaking with our dealers about them possibly acquiring a substantial inventory of DSC products to be sold in the coming years, and will let you know if we are successful in concluding such an arrangement or arrangements.

Thank you again for your support and for your suggestions over the years. Our products have been developed in collaboration with this community, and our lives have been enriched by our meetings and conversations with all of you.

Please let me know if you have any questions.

Susan Corley, President at DSC Laboratories

What do you think of DSC Laboratories shutting down? What do you think led to this decision? Let us know your thoughts in the comment section below.

Luma AI’s Dream Machine – New AI Video Generator Launched and Available to the Public
Tue, 25 Jun 2024

Since the tremendous buzz surrounding OpenAI’s Sora, not a month goes by without the announcement of a new AI video generator. This time around, we’re looking at Luma AI’s Dream Machine. According to the product page, their freshly launched model makes high-quality, realistic videos from text – and does it fast. What’s more exciting about this generator, though, is that anyone can try it out now, for free. Let’s give it a go, shall we?

It’s not the first time we’ve written about Luma AI. I am a big fan of their automated 3D scans, which users can create from simple smartphone videos. In my opinion, this feature is particularly useful for location scouting (you can watch the entire workflow explained in this video post). The developers even call themselves “The 3D AI Company,” so it was rather unexpected to see them join the video generation race. But then again, maybe they could transfer their knowledge and tons of scanned footage into a working model. You never know until you try.

What Luma AI’s Dream Machine promises

In the description, Luma AI presents Dream Machine as a high-quality text-to-video (and image-to-video) model that is capable of generating physically accurate, consistent, and eventful shots. They also praise its incredible speed: The neural network can allegedly generate 120 frames in 120 seconds (spoiler: my tests showed that’s not always the case because some generations took up to 7 minutes). Another highlighted advantage of this tool is its consistency:

Dream Machine understands how people, animals and objects interact with the physical world. This allows you to create videos with great character consistency and accurate physics.

From the model description on Luma AI’s webpage

Just a side note: Most AI video generators available on the market struggle with consistency and accurate physics, as we demonstrated during some thorough tests.

At the moment, Dream Machine generates 5-second long shots (with the possibility to extend them) and is said to understand and recreate camera motions, both cinematic and naturalistic.

Testing the language understanding

When you head over to Luma AI’s website and log in, the Dream Machine launches automatically. It has a simple interface that consists of a text field and an icon for an image upload (we will take a closer look at it below).

For the sake of fair comparison, the first prompt I fed to the model was the same one that I used in my previous AI video generator tests. I made a few adjustments, though, adding a description of the camera motion and of how the character should act. After several minutes, the neural network spat out the result below.

A black-haired woman in a red dress stands by the window without motion and looks at the evening snow falling outside, the camera slowly pushes in.

My prompt

As you can see, just like its competitors, this video generator struggled to keep the snow outside the window. (Maybe that’s why the woman looks so sad and confused in the resulting scene.) Additionally, although I asked the AI to place my character by the window motionless, Dream Machine decided to add some action and drama.

At the same time, the overall understanding of the described scene is amazing. I got everything I asked for: a window, snow, a black-haired woman in a red dress. When the woman turns around, her face and figure do not suffer from distortion. She stays consistent and looks pretty normal. Personally, I haven’t witnessed such consistency in AI video generators so far (excluding Sora and Google’s Veo, as they are not available for public testing). What about you?

Enhanced prompt and prompting tips

The only setting that you can try out so far in Luma AI’s generator is called “enhanced prompt.” After entering your description into the text field, a corresponding checkbox will appear. It is enabled by default, so my previous result already featured this option. According to the developers of Dream Machine, it provides the model with more creative freedom, so you don’t have to elaborate much to get beautiful and realistic results. Your prompts can be short, and the model will fill in the gaps with the best matching details.

If you disable this option, you will need to describe your scene, action, movements, and objects as detailed as possible. Since my previous text request was already elaborate enough, for the second run I used it again and unchecked the “Enhance Prompt” box. Here is the result:

Woah! What happened to my lovely woman? I don’t know about you, but I get chills when I look at this result. The reason is not only the displacement of the character’s left hand but also the way she moves her shoulders and turns her head. I swear, it could be a very fitting sequence for a witch-hunt horror movie. Apart from that, the model had the same contextual issues as with the enhanced prompt above.

Image-to-video approach

Like other AI video generators, Luma AI’s Dream Machine allows users to upload an image as their input and provide it with additional text. In that case, developers recommend enabling the “Enhance Prompt” button and describing what motions and actions (both with the camera and your characters) should happen in the scene.

Let’s give it one more try. For this experiment, I asked the image generator Midjourney to create the same dark-haired woman but in the form of a still image. My original prompt was left unchanged, albeit without the camera directions. This is when I realized that text-to-image AI also has problems with windows and weather conditions:

Luma AI Dream Machine - Midjourney pic as an input for video generation

I managed to get a better result with some additional parameters, but for some unknown reason, my character became an anime figure. Doesn’t matter; let’s stick to the first attempt since the rest of the picture was quite good for a test:

What do you think? Although snow falls everywhere, the woman keeps still this time, except for a few hair movements. A bigger problem is that the video generator didn’t get the camera motion right. I tried several times, but for some reason, I always got a boom-up instead of a simple push-in. So much for precision.

Current limitations of Luma AI’s Dream Machine

As the developers themselves point out, the model is still in the research and beta phase, so it does have some limitations. For example:

  • This AI video generator (like the others already available on the market) can really struggle with the movement of humans or animals. Try generating a running dog, and you will notice it doesn’t move its paws at all.
  • In the current version, Luma AI’s Dream Machine cannot insert or create any coherent and/or meaningful text.
  • Morphing is also an issue and can occur regularly, meaning that your objects can change their form during complicated movements or actions.
  • There is a current lack of flexibility. You cannot generate clips longer than 5 seconds from the get-go, add negative prompts, or change the aspect ratio – at least for now. The developers state in the FAQ section that they are working on additional controls for upcoming versions of Dream Machine and are open to feedback on their Discord channel.

Luma AI’s Dream Machine is available for tryouts

All in all, Luma AI’s Dream Machine feels more advanced than the other AI video generators I’ve tested so far. The consistency of results is higher, people’s faces look more realistic, and the motion is not bad either. However, it’s still a far cry from what OpenAI’s Sora promises and showcases. But as long as we can’t get our hands on it, promises remain promises.

You can try out Dream Machine here. Currently, users get 5 free generations per day. There are also paid plans that will get you watermark-free downloads, commercial rights, and 30 free + 120 paid generations.

What are your first impressions of Luma AI’s Dream Machine? Have you tried it already? We’re aware there is a huge discussion on AI video generators in our industry. What is your take on it? Let’s talk in the comments below, and please, stay kind and respectful to each other.

Feature image source: Luma AI

Tribeca Festival Will Screen “Sora Shorts” – Five Films Generated by AI
Tue, 04 Jun 2024

Tribeca has announced a “one of a kind” program this year – a special section, “Sora Shorts,” that will showcase only AI-generated films. Five chosen filmmakers got early access to OpenAI’s Sora in order to create dedicated work. This way, one of the world’s biggest film and video festivals embraces the advances of artificial intelligence and offers a platform for discussion. More details below.

It’s not the first time a film event has included work completed (fully or partially) by AI. We even wrote about an entire AI Film Festival, organized annually by Runway. However, this is indeed a screen debut for Sora and also an unprecedented move for such a major festival. To host this section, Tribeca teamed up with OpenAI, one of the biggest AI companies at the moment.

What is Sora?

OpenAI’s Sora is an AI text-to-video generator capable of creating clips of up to one minute based solely on users’ text descriptions. The first demonstration of this tool generated a lot of buzz and heavy discussion among both industry professionals and amateurs. The level of consistency and photorealism ignited excitement and fear at the same time. Since then, various issues have come up, including the ethical question of which videos Sora was trained on.

Sora is still in closed Beta and not available to the general public. Occasionally, OpenAI showcases some of the curated results created by chosen filmmakers and creatives who had early access.

Sora Shorts: what is it about?

Together with OpenAI, Tribeca commissioned five filmmakers for their special program, “Sora Shorts,” and granted them access to OpenAI’s video generator. Participants include Bonnie Discepolo, Ellie Foumbi, Reza Sixo Safai, Michaela Ternasky-Holland, and Nikyatu Jusu, who won the Sundance Grand Jury Prize with her debut horror feature “Nanny” in 2022. (You can read more about each of the creators for Sora Shorts here).

A film still from “Nanny” by Nikyatu Jusu, 2022

Sora Shorts will take place on June 15. After the 20-minute screening, filmmakers will participate in a panel discussion alongside Brad Lightcap, the COO of OpenAI.

The idea behind it

Tribeca’s new festival program is said to be driven by a spirit of exploration. Accordingly, the chosen filmmakers only got a few weeks to come up with their AI films.

Tribeca is rooted in the foundational belief that storytelling inspires change. Humans need stories to thrive and make sense of our wonderful and broken world. Sometimes these stories come to us as a feature film, an immersive experience, a piece of art, or even an AI-generated short film. I can’t wait to see what this group of fiercely creative Tribeca alumni come up with.

Jane Rosenthal, co-founder and CEO of Tribeca Enterprises, in a statement to the press

Tribeca is 22 years old and known for constantly developing new formats and divisions. A couple of years ago, for instance, the festival added a dedicated video games category. (That’s also why they dropped the word “Film” from their original name.) So it’s no wonder they will be the first to treat AI-generated movies as a special form of art.

Would you watch Sora Shorts?

However, I’m curious how the festival audience will react. AI video generators have been among the most discussed topics in our articles and reviews this year. (The latest one is dedicated to Google’s Veo, which strives to become Sora’s competitor.) The development of generative tech leaves a lot of industry professionals unsure about how to go on with their careers – not to mention the massive backlash against the rather intense speed at which these tools grow and advance, outpacing any regulation.

Tribeca Festival takes place in New York City, June 5-16.

What are your thoughts? Would you go and watch Sora Shorts, created by chosen filmmakers? If you were to attend the panel discussion afterward, what would you ask the creators? Let’s talk in the comments below!

Image source: generated by Midjourney with an integrated still from a Sora-generated clip.

DJI Drones May Be Banned in the U.S. – Are They a Security Threat or Essential Support?
Mon, 03 Jun 2024

DJI, the Chinese company that manufactures the most popular drones in the U.S., has come under fire from the U.S. government’s Defense Department. Not only might the U.S. armed forces be prohibited from buying its drones in the future, according to an article in the New York Times, but the purchase ban is likely to extend to other federal agencies and programs. Will DJI drones really be banned in the U.S. this time?

Despite the fact that drones are avidly used by filmmakers in America to film scenes we never thought possible (for example, this scene in the movie “Ambulance” here or the motorbike chase scene in “Skyfall” here), their original and primary use was for military surveillance. Today, however, drones are also used for real-time search and rescue operations, monitoring wildlife, delivering medical supplies, damage assessment after a tornado or earthquake (and the list goes on) – always able to get to places humans can’t (easily) reach. Now, the United States considers the company a security threat, and DJI drones could be banned.

Swimmers rescued by drone. Source: National Geographic

So what’s the problem?

“DJI presents an unacceptable national security risk, and it is past time that drones made by Communist China are removed from America… Any attempt to claim otherwise is a direct result of DJI’s lobbying efforts.”

Representative Elise Stefanik, Republican of New York (a sponsor of the Countering CCP Drones Act)

Ms. Stefanik added that government agencies have shown that DJI drones are providing data on “critical infrastructure” in the United States to the Chinese Communist Party, but she did not elaborate on her statement.

DJI’s new storefront in Manhattan, New York City. Source: DJI

The Countering CCP Drones Act

The House Energy and Commerce Committee passed the bill, called the Countering CCP Drones Act, unanimously last month, and it could go to the House for a vote within the next couple of months. If it passes, DJI drones would be classified as communications equipment that “pose a national security risk” and added to the FCC’s list under the Secure and Trusted Communications Networks Act of 2019. This means they could not operate on U.S. networks.

DJI has run into trouble before. In one instance in 2020, the US Department of Commerce prohibited US-based companies from exporting technology to DJI. One of the questions raised now is whether the ban would be restricted to future DJI drone purchases or include those already in use.

One reason for the proposed DJI ban is the push for the U.S. to further develop its own drone industry, since DJI dominates the market – something the U.S. has yet to do, at least in comparison to what DJI offers, despite the popularity of drones. Meanwhile, Chinese companies like TikTok and DJI are proving very popular in the United States. DJI held 58% of the drone market in 2022, and it likely holds even more now, given the quality of its products. DJI drones are, indeed, very good. We’ve reviewed the DJI Avata 2 here and the Mini 4 Pro here.

DJI drones are used for everything from search and rescue to documentary filmmaking. Source: DJI

What are the chances?

DJI is fighting back aggressively. Last year, they spent $1.6 million on lobbying Congress, according to the Times, with one argument being the lack of a U.S. company that can compete either in price or quality. They’ve also created a website called the Drone Advocacy Alliance, which is worth visiting. It offers information on how drones benefit various aspects of society and provides an option to contact Congress. Whether the benefits outweigh the (kind of vague) security concerns is beyond my knowledge, but it’s interesting to see how these little flying robots have become a part of our lives.

There are many pros and cons to the DJI drone issue – many of them valid. However, judging by how heavily DJI drones are used in the public safety sector alone, an alternative might be to phase down their use gradually, giving U.S. companies time to develop reasonable competition. As for filmmakers, we are a resilient and inventive group, and when challenged, we will continue to find new and creative solutions. At least, that is my hope.

What is your opinion? Should DJI drones be banned? Do you agree that they are too much of a security risk? Let us know in the comments below.

Sony Pictures to Leverage AI in Film Production to Cut Costs
Mon, 03 Jun 2024

We all knew this was coming. For years, the general public has been trying to use generative AI to their creative advantage, while the business world has been scrambling to find a way to monetize it. At an investor meeting on Thursday, May 30th, the CEO of Sony Pictures Entertainment, Tony Vinciquerra, stated that they plan to use AI as an “efficient” way to produce films for television and theaters. Let’s take a look at what this could mean for the filmmaking industry.

The first question during a Q&A portion of the Sony investor call was about AI. You can view the entire presentation on Sony’s Investor Relations website, and the Q&A portion starts at 2:53:36 with CEO Tony Vinciquerra and CFO Philip Rowley. “Industry-wide box office overall has improved from the pandemic but still has not fully rebounded to pre-pandemic levels,” Vinciquerra says in his presentation. So clearly, Sony (as well as other studios) will be grasping at straws in an attempt to make cheaper movies and maximize profits.

Total Theatrical Market in calendar years. Source: Sony Investor Presentation.

While no concrete goals or strategies were laid out, it was made clear that AI will be part of their approach to making movies going forward. Vinciquerra ends the question by hammering the point home: “We will be looking at ways to use AI to produce both films for theaters and television in a more efficient way, using AI primarily.” While AI can do a great many things, it cannot (as of yet) make movies. So let’s look at what he may have meant and at the ways studios could use AI to produce films more efficiently.

“Written” by AI

The “G” in ChatGPT stands for “Generative.” It (and other similar models) can “generate” material based on data on which it has been trained. So in theory, one could use a text-based generative AI model to come up with movie ideas based on prompts by a user. It could even generate a script because it knows the proper formatting based on data scraped from the internet.
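
As a purely illustrative sketch of what “AI as a writing assistant” might look like in practice – not how any studio actually works – here is a short example using OpenAI’s Python client to brainstorm loglines from a prompt. The model name and prompt are assumptions for demonstration only:

```python
# Illustrative sketch: brainstorming loglines with a text-generation model.
# Requires an OpenAI API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a development assistant who pitches loglines."},
        {"role": "user", "content": "Pitch three loglines for a heist thriller set on a film set."},
    ],
)

print(response.choices[0].message.content)
```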

One of many concerns over the use of AI, and something stipulated in the most recent contract ratified by the WGA, is that studios “cannot use AI to write scripts or to edit scripts that a writer has already written.” It was decided that AI was a tool to be used by writers, not as an entity to replace writers. An important distinction.

A human writer. Source: Ketut Subiyanto via Pexels.

This is something specifically referred to by Vinciquerra in his presentation. “[Creatives] are very sensitive; in fact, we had an 8-month strike over AI last year, both actors and writers. That was one of the primary drivers of that strike.” He then goes on to say that the agreements from those SAG/WGA contracts, as well as an upcoming negotiation with IATSE, will define what the studios will be able to do with AI.

So, while a Development Executive or Writer can use AI to assist in the writing of a story or script, it cannot be used to write an entire script or rewrite something a writer has already written. But how does using AI to help write make for a more efficient film? What is the benefit for the studio of using AI to produce films in a more efficient way?

“Executive Produced” by AI

Another aspect of generative AI that may be useful to a production studio is charting successes and failures. Being able to predict successes would be a game-changer for movie studios. Movie studios are, in fact, businesses.

For years, Executives have been chasing what they think audiences want to see – what they think will sell. But now they may have a calculator that can accurately predict (based on available data) what will bring audiences to theaters. What audiences will or will not respond to is something that can be tracked, and doing that monotonous, repetitive research is something AI excels at – in a fraction of the time it takes a human.
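
To make “charting successes and failures” slightly more concrete, here is a deliberately toy sketch of what such predictive modeling could look like with scikit-learn. The features and numbers are entirely made-up placeholders, not real box-office data, and real studio analytics would be far more involved:

```python
# Toy sketch of box-office prediction with scikit-learn - hypothetical features only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder feature matrix: [budget_musd, franchise_flag, star_power_index, genre_code]
X = np.array([
    [200, 1, 0.9, 0],
    [ 30, 0, 0.4, 2],
    [ 90, 1, 0.7, 1],
    [ 15, 0, 0.2, 3],
    [120, 0, 0.8, 0],
    [ 60, 1, 0.5, 2],
])
y = np.array([850, 45, 320, 20, 400, 150])  # made-up worldwide gross in $M

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predict the gross of a hypothetical new project.
print(model.predict([[100, 1, 0.6, 1]]))
```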

Executive Robots on Futurama. Source: Hulu.

So, is it a more efficient way of making films to pay Executives less and rely on AI more? It certainly would appear that way, but I don’t think I could get any human Executive to agree with me.

“Art Direction” by AI

There has already been controversy around using AI to produce posters and artwork for movies and TV shows. It is one thing generative AI has gotten quite good at. Some pieces of AI-generated art are virtually indistinguishable from photographs or work created by a human artist. Posters for Alex Garland’s ‘Civil War’ and Marvel’s ‘Loki’ Season 2 were, at least in part, created using AI.

Loki Season 2 Poster. Source: Marvel.

Perhaps a more efficient way of making films is to cut back on paying artists by using AI to generate promotional materials for movies and series (of course, I am not actually suggesting this, nor am I an advocate of AI taking the jobs of humans; I’m merely speaking rhetorically). Or maybe, with the advent of Stagecraft technology and the LED volume, you could use AI-generated backgrounds and landscapes in live-action photography. Anything normally created by an Art Department – whether Concept Art, Production Design, Props, Costumes, etc. – could theoretically be generated by AI.

There are image-to-3D model conversions that can be done with AI, so modelers and VFX artists could even be lumped into this category. But how is that efficient for a movie studio? You still need human interaction and the human emotions evoked by created artwork. And that is something AI will never be able to replicate, no matter how good it gets. It is unique to the human experience and cannot be explained, scraped from the internet, or otherwise recreated.

“Visual Effects Created” by AI

Video generation is one of the newer revolutions in generative AI within the past year or so. OpenAI’s Sora model made waves by showcasing photorealistic videos created from a single text prompt. There has been some pushback on a recently released short film called “Air Head,” which was touted as being created by Sora. The insinuation that the video was made solely by Sora was soon debunked when VFX artist Patrick Cederberg revealed that quite a bit of post-production work was done to make the Sora-generated video clips work in the short.

Stills from videos generated by Sora. Source: OpenAI.

But Sora is not the only video generation model out there. Google has recently announced Veo, using its DeepMind AI to generate videos from a text prompt. Even Adobe Premiere, the industry-leading video editing software, has announced features allowing users to extend video clips beyond the recorded motion using AI, or to erase unwanted elements from clips – all of which previously seemed impossible without hours of work from a VFX artist.

“Music Composed” by AI

While video generation is a big step forward for AI, the most recent shocking innovation has been AI music generation. If the picture and video AI revolution is a speeding car, AI-generated music is a rocket ship. I remember sampling Meta’s Musicgen a few months ago and getting something that resembled a non-melodic score for an 8-bit video game. But now you can create entire songs, complete with vocal tracks, at the push of a button.

CineD’s own Mascha Deikova, inspired by CineD’s co-creator Nino Leitner’s AI-generated tune, just published an article about the AI music generator revolution and how impressive it is. I would urge you to read her in-depth article outlining the pros, cons, and various models that can generate music based on a single text prompt.

End Credits

While all of this is fun to research and talk about, I don’t think any creatives working in the industry are really concerned about these AI models taking any jobs away from creative individuals. Sure, actors need to safeguard themselves against unauthorized use of AI generating their likenesses, and writers need to make sure Artificial Intelligence can’t legally “write a movie screenplay.” But when it comes down to it, all of these generative models have the same thing in common: you need a user to input the idea to begin with. You need an individual with years of experience, talent, and creativity to imagine these things before the AI model can do anything at all.

I don’t think it’s cynical of me to say that the business side of filmmaking will always try to use the latest tech to cut costs, drive up profits, and ensure their bonuses. And that’s all this latest Sony investor call really was: Tony Vinciquerra using buzzwords to assure shareholders that they can make movies “more efficiently” using the latest technology. Whatever that means.

A woman sitting at a laptop. Source: Freepik

Finding work in Hollywood in the current climate is harder than ever. But it’s not because AI is taking jobs away from creatives. It’s because the Hollywood business model has become averse to taking risks. Studios are clinging to franchises and established IPs, wringing every cent they can out of them. Gone are the days of the spec script boom of the ’90s. Gone are the days of the indie arthouse movies of the 2000s. Even the leader of the modern indie film revolution, A24, has started to expand its catalog into more commercial, blockbuster-type films.

Anyone working in movies today will tell you the landscape looks vastly different than it did 5 years ago. And with recent big-budget movies like “The Fall Guy” and “Furiosa” underperforming at the box office, studios are looking for assurances that they will always be able to make money from their films.

Did I miss anything in the world of AI generative models that could be applied to filmmaking? What do you think about Sony Pictures using AI to create films? And maybe more importantly, what do you think the film landscape will look like another 5 years from now? I’m interested to hear your thoughts.

Sora Required Help in Post-Production – Does it Even Matter?
Mon, 03 Jun 2024

We’ve recently learned that the rather viral “Air Head” short video (by ShyKids), made with OpenAI’s Sora video generator, wasn’t purely generated: the final version still required some conventional editing to compensate for some of generative AI’s shortcomings. This revelation – a slight impotence of the all-powerful AI – generated quite a stir among tech-savvy crowds as well as filmmakers. But does the shortcoming demonstrated here really change the direction of generative AI’s progress?

When we were young and naive, we thought artificial intelligence would replace human labor in all the hard, physically demanding, or extremely boring jobs. No longer would we mine coal, lift heavy loads, drive across fields to harvest wheat, or do the dishes. We could focus on reading, writing poems, and being creative with our newfound recreational time. Then came generative AI with the opposite vision – you keep doing the dishes, and we’ll take over creativity. I’m not sure we signed up for this. I’m not sure we were asked to sign.

Sora and other recent AI-based text-to-video generators have been transforming our industry for a while now. The ability to produce ever-improving footage without any dedicated gear (and, in some cases – with no cinematic knowledge) is both exciting and terrifying. But recently, some caveats emerged.

“Air Head” by ShyKids was one of the first clips created by independent creators using early access to OpenAI’s Sora. While the creators are independent, the terms under which the video was created haven’t been disclosed. And although the video’s headline says it was made “with Sora,” it seems like most viewers believed the video wasn’t edited or manipulated with other tools.

Every magic trick has its secrets

Not too long after “Air Head” aired (and went quite viral), additional details began to emerge. In this extended interview, ShyKids’ post-production specialist, Patrick Cederberg, dives deep into the creative process behind the clip. If you’re interested in the ins and outs of generative AI and the workflow surrounding it, I truly recommend reading the entire thing, but the BTS video sums it up nicely.

So yes. Traditional post-production practices, techniques, and effects are used on the AI-generated video. Some perceive this as a victory for the hard-core, traditional editors and post-productionists. The machine can’t replace true human creativity! And there’s truth there, but does it even matter?

The promise of generative AI

As with most generative AI companies, the promises seem quite optimistic: you type the prompts, and we’ll do the rest. While this might be a nice sales pitch, it will rarely work as advertised – at least for now. The caveat mentioned here doesn’t look good for their campaign, but it matters little to the creative visual industry. While it would be nice (or horrific) to be able to type a line and get a feature film, we shouldn’t underestimate the current state of generative AI. Sora and other video generators are at their alpha stage (or even pre-alpha) and can already produce trustworthy content. One may be able to spot inconsistencies and outright oddities from time to time, but mostly when specifically looking for them.

Video-generating tools

CineD and other websites are filled with reviews of game-changing cameras. Revolutionary hybrid cameras changed filmmaking forever. The move towards large-sensor cine cameras also reshaped the field, and even tiny action cameras made an impact. Don’t get me started on smartphones. Yet none of us expects these image-capturing tools to deliver a finished film without the help of proper editing, grading, or VFX. So why do we expect this kind of full circle from Sora and other video generators?

Sora samples. Credit: OpenAI, though it’s complicated…

Eyes on the ball

Don’t let this minor, mostly promotional failure distract you from what’s going on. Even with some necessary post-production, even with frustrating inconsistencies, AI video generators may save your next video. Think about what they can do. The way Adobe Firefly changes Premiere with subtle details may prove extremely influential. Think about the amount of time (and budget) a generated shot can save you – depicting vast landscapes or slow-motion explosions, or just deleting a pesky, unwanted detail.

I honestly don’t know if we’ll ever get to a point where AI can do it all. And if we do, it will still be a long time before that product becomes interesting, funny, surprising, or emotional. I do, however, think that current tools already offer revolutionary features and capabilities that may help some of us while, unfortunately, disadvantaging others.

Are you excited about recent AI progress? Frightened? Enraged? Let us know in the comments.
