<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Media &amp; Entertainment</title><link>https://cloud.google.com/blog/products/media-entertainment/</link><description>Media &amp; Entertainment</description><atom:link href="https://cloudblog.withgoogle.com/blog/products/media-entertainment/rss/" rel="self"></atom:link><language>en</language><lastBuildDate>Tue, 11 Nov 2025 17:00:03 +0000</lastBuildDate><image><url>https://cloud.google.com/blog/products/media-entertainment/static/blog/images/google.a51985becaa6.png</url><title>Media &amp; Entertainment</title><link>https://cloud.google.com/blog/products/media-entertainment/</link></image><item><title>How Lightricks trains video diffusion models at scale with JAX on TPU</title><link>https://cloud.google.com/blog/products/media-entertainment/how-lightricks-trains-video-diffusion-models-at-scale-with-jax-on-tpu/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;Training large video diffusion models at scale isn't just computationally expensive — it can become impossible when your framework can't keep pace with your ambitions. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://jax.dev" rel="noopener" target="_blank"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;JAX&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt; has become a popular computational framework across AI applications, now recognized for its capabilities in training large-scale AI models, such as LLMs and &lt;/span&gt;&lt;a href="https://cloud.google.com/blog/topics/customers/escalante-uses-jax-on-tpus-for-ai-driven-protein-design"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;life sciences models&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;. Its strength lies not just in performance but in an expressive, scalable design that gives innovators the tools to push the boundaries of what's possible. We're consistently inspired by how researchers and engineers leverage JAX's ecosystem to solve unique, domain-specific challenges — including applications for generative media.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;Today, we're excited to share the story of &lt;/span&gt;&lt;a href="https://www.lightricks.com/" rel="noopener" target="_blank"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;Lightricks&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;, a company at the forefront of the creator economy. Their &lt;/span&gt;&lt;a href="https://ltx.studio/blog/ltx-2-the-complete-ai-creative-engine-for-video-production" rel="noopener" target="_blank"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;LTX-Video&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt; team is building high-performance video generation models, and their journey is a masterclass in overcoming technical hurdles. I recently spoke with Yoav HaCohen and Yaki Bitterman, who lead the video and scaling teams, respectively. They shared their experience of hitting a hard scaling wall with their previous framework and how a strategic migration to JAX became the key to unlocking the performance they needed.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;Here, Yoav and Yaki tell their story in their own words. – &lt;/span&gt;&lt;strong style="font-style: italic; vertical-align: baseline;"&gt;Srikanth Kilaru&lt;/strong&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;, Senior Product Manager, Google ML Frameworks&lt;/span&gt;&lt;/p&gt;
&lt;hr/&gt;
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;The creator's challenge&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At Lightricks, our goal has always been to bring advanced creative technology to consumers. With apps like &lt;/span&gt;&lt;a href="https://www.facetuneapp.com/?srsltid=AfmBOoo8ZXXKPBsz1wyL8Rvq9ZtL65N9K51p_yyRjM1DoH6EqZ1oEkLQ" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Facetune&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, we saw the power of putting sophisticated editing tools directly into people's hands. When generative AI emerged, we knew it would fundamentally change content creation.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;We launched &lt;/span&gt;&lt;a href="https://ltx.studio/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;LTX Studio&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to build generative video tools that truly serve the creative process. Many existing models felt like a "prompt and pray" experience, offering little control and long rendering times that stifled creativity. We needed to build our own models—ones that were not only efficient but also gave creators the controllability they deserve.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our initial success came from training our first real-time video generation model on &lt;/span&gt;&lt;a href="https://cloud.google.com/tpu"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Cloud TPUs &lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;with &lt;/span&gt;&lt;a href="https://docs.pytorch.org/xla/release/r2.8/index.html" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;PyTorch/XLA&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;. But as our ambitions grew, so did the complexity. When we started developing our &lt;/span&gt;&lt;a href="https://www.prnewswire.com/news-releases/lightricks-launches-13b-parameters-ltx-video-model-breakthrough-rendering-approach-generates-high-quality-efficient-ai-video-30x-faster-than-comparable-models-302447660.html#:~:text=LTXV%2D13B%20introduces%20%22multiscale%20rendering,LTX%20Video%20in%20the%20marketplace." rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;13-billion-parameter model&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, we hit a wall.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Hitting the wall and making the switch&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our existing stack wasn’t delivering the training step times and scalability we needed. After exploring optimization options, we decided to shift our approach. We paused development to rewrite our entire training codebase in JAX, and the results were immediate. Switching to JAX felt like a magic trick, instantly providing the necessary runtimes.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;This transition enabled us to effectively scale our tokens per sample (the amount of data processed in each training step), model parameters, and chip count. With JAX, sharding strategies (sharding divides large models across multiple chips) that previously failed now work out of the box on both small and large pods (clusters of TPU chips).&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;These changes delivered linear scaling that translates to 40% more training steps per day — directly accelerating model development and time to market. Critical issues with FlashAttention and data loading also worked reliably. As a result, our team's productivity skyrocketed, doubling the number of pull requests we could merge in a week.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Why JAX worked: A complete ecosystem for scale&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The success wasn't just about raw speed; it was about the entire &lt;/span&gt;&lt;a href="https://docs.jax.dev/en/latest/index.html#ecosystem" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;JAX stack&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, which provided the building blocks for scalable and efficient research.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;A clear performance target with MaxText:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; We used the open-source &lt;/span&gt;&lt;a href="https://github.com/AI-Hypercomputer/maxtext" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;MaxText &lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;framework as a baseline to understand what acceptable performance looked like for a large model on TPUs. This gave us a clear destination and the confidence that our performance goals were achievable on the platform.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;A robust toolset:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; We built our new stack on the core components of the JAX ecosystem based on the MaxText blueprint. We used &lt;/span&gt;&lt;a href="https://flax.readthedocs.io/en/v0.8.3/index.html" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Flax&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; for defining our models, &lt;/span&gt;&lt;a href="https://optax.readthedocs.io/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Optax&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; for implementing optimizers, and &lt;/span&gt;&lt;a href="https://orbax.readthedocs.io/en/latest/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Orbax&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; for robust checkpointing — all core components that work together natively.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Productive development and testing:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The transition was remarkably smooth. We implemented unit tests to compare our new JAX implementation with the old one, ensuring correctness every step of the way. A huge productivity win was discovering that we could test our &lt;/span&gt;&lt;a href="https://docs.jax.dev/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;sharding&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; logic on a single, cheap CPU before deploying to a large TPU slice. This allowed for rapid, cost-effective iteration.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Checkpointing reliability:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; For sharded models, JAX’s checkpointing is much more reliable than before, making training safer and more cost-effective.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Compile speed &amp;amp; memory:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; JAX compilation with &lt;/span&gt;&lt;a href="https://docs.jax.dev/en/latest/_autosummary/jax.lax.fori_loop.html" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;lax.fori_loop&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; is fast and uses less memory, freeing capacity for tokens and gradients.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong style="vertical-align: baseline;"&gt;Smooth scaling on a supercomputer:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; With our new JAX codebase, we were able to effectively train on a reservation of thousands of TPU cores. We chose TPUs because Google provides access to what we see as a "&lt;/span&gt;&lt;a href="https://cloud.google.com/solutions/ai-hypercomputer"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;supercomputer&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;" — a fully integrated system where the &lt;/span&gt;&lt;a href="https://cloud.google.com/tpu/docs/system-architecture-tpu-vm"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;interconnects and networking&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; were designed first, not as an afterthought. We manage these large-scale training jobs with our own custom Python scripts on &lt;/span&gt;&lt;a href="https://cloud.google.com/products/compute"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Compute Engine (GCE)&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, giving us direct control over our infrastructure. We also use &lt;/span&gt;&lt;a href="https://cloud.google.com/storage"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Cloud Storage&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and stream the training data to the TPU virtual machines.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;
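The compile-time benefit of `lax.fori_loop` mentioned in the list above comes from tracing the loop body once rather than unrolling it into the compiled graph. A minimal sketch, where the loop body is an arbitrary stand-in rather than a real training step:

```python
import jax
import jax.numpy as jnp
from jax import lax

@jax.jit
def run_steps(x):
    # The body is traced a single time; XLA compiles a rolled loop, so
    # compile time and program size stay independent of the step count.
    def body(i, acc):
        return acc + 0.01 * jnp.sin(acc)
    return lax.fori_loop(0, 100, body, x)

out = run_steps(jnp.ones((4,)))
```

A plain Python `for` loop inside the jitted function would instead be unrolled into 100 copies of the body at trace time, inflating both compilation time and device memory for the program.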
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/JAX-Stack-Lightricks-Architecture.max-1000x1000.png"
        
          alt="JAX-Stack-Lightricks-Architecture"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="9djnu"&gt;Architectural diagram showing the Lightricks stack&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;hr/&gt;
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Build your models with the JAX ecosystem&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Lightricks' story is a great example of how JAX's powerful, modular, and scalable design can help teams overcome critical engineering hurdles. Their ability to quickly pivot, rebuild their stack, and achieve massive performance gains is a testament to both their talented team and the tools at their disposal.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The JAX team at Google is committed to supporting innovators like Lightricks and the entire scientific computing community.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Share your story&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: Are you using JAX to tackle a challenging scientific problem? We would love to learn how JAX is accelerating your research.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Help guide our roadmap&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: Are there new features or capabilities that would unlock your next breakthrough? Your feature requests are essential for guiding the evolution of JAX.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Please reach out to the team via&lt;/span&gt; &lt;a href="https://github.com/google/jax" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;GitHub&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to share your work or discuss what you need from JAX. Check out documentation, examples, news, events and more at &lt;/span&gt;&lt;a href="http://jaxstack.ai" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;jaxstack.ai&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and &lt;/span&gt;&lt;a href="http://jax.dev" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;jax.dev&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Sincere thanks to Yoav, Yaki, and the entire Lightricks team for sharing their insightful journey with us. We're excited to see what they create next.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Tue, 11 Nov 2025 17:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/how-lightricks-trains-video-diffusion-models-at-scale-with-jax-on-tpu/</guid><category>AI &amp; Machine Learning</category><category>Infrastructure Modernization</category><category>Customers</category><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>How Lightricks trains video diffusion models at scale with JAX on TPU</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/how-lightricks-trains-video-diffusion-models-at-scale-with-jax-on-tpu/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Yaki Bitterman</name><title>Research Team Lead, Model Scaling, Lightricks</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Yoav HaCohen, PhD</name><title>Director of Research, Model Foundations, Lightricks</title><department></department><company></company></author></item><item><title>StreamSight: Driving transparency in music royalties with AI-powered forecasting</title><link>https://cloud.google.com/blog/products/media-entertainment/streamsight-driving-transparency-in-music-royalties-with-ai-powered-forecasting/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In an industry generating vast volumes of streaming data every day, ensuring precision, speed, and transparency in royalty tracking is a constant and evolving priority. 
For music creators, labels, publishers, and rights holders, even small gaps in data clarity can influence how and when income is distributed — making innovation in data processing and anomaly detection essential.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;To stay ahead of these challenges, BMG partnered with Google Cloud to develop StreamSight, an AI-driven application that enhances digital royalty forecasting and detection of reporting anomalies. The tool uses machine learning models to analyze historical data and flag patterns that help predict future revenue — and catch irregularities that might otherwise go unnoticed.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The collaboration combines Google Cloud’s scalable technology, such as &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery?e=48754805&amp;amp;hl=en"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;BigQuery&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, &lt;/span&gt;&lt;a href="https://cloud.google.com/vertex-ai"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Vertex AI&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, and &lt;/span&gt;&lt;a href="http://cloud.google.com/looker-studio"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Looker&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, with BMG’s deep industry expertise. Together, they’ve built an application that demonstrates how cloud-based AI can help modernize royalty processing and further BMG’s and Google’s commitment to fairer and faster payout of artist share of label and publisher royalties. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;“At BMG, we’re accelerating our use of AI and other technologies to continually push the boundaries of how we best serve our artists, songwriters, and partners. StreamSight reflects this commitment — setting a new standard for data clarity and confidence in digital reporting and monetization. Our partnership with Google Cloud has played a key role in accelerating our AI and data strategy.”&lt;/span&gt;&lt;strong style="font-style: italic; vertical-align: baseline;"&gt; – &lt;/strong&gt;&lt;strong style="font-style: italic; vertical-align: baseline;"&gt;Sebastian Hentzschel&lt;/strong&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;,&lt;/span&gt;&lt;strong style="font-style: italic; vertical-align: baseline;"&gt; &lt;/strong&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;Chief Operating Officer, BMG&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-aside"&gt;&lt;dl&gt;
    &lt;dt&gt;aside_block&lt;/dt&gt;
    &lt;dd&gt;&amp;lt;ListValue: [StructValue([(&amp;#x27;title&amp;#x27;, &amp;#x27;Try Google Cloud for free&amp;#x27;), (&amp;#x27;body&amp;#x27;, &amp;lt;wagtail.rich_text.RichText object at 0x7f66d989f8b0&amp;gt;), (&amp;#x27;btn_text&amp;#x27;, &amp;#x27;&amp;#x27;), (&amp;#x27;href&amp;#x27;, &amp;#x27;&amp;#x27;), (&amp;#x27;image&amp;#x27;, None)])]&amp;gt;&lt;/dd&gt;
&lt;/dl&gt;&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h2&gt;&lt;span style="vertical-align: baseline;"&gt;From Data to Insights: How StreamSight Works&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At its core, StreamSight utilizes several machine learning models within Google BigQuery ML for its analytical power:&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;For Revenue Forecasting:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;ARIMA_PLUS&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: This model is a primary tool for forecasting revenue patterns. It excels at capturing underlying sales trends over time and is well-suited for identifying and interpreting long-term sales trajectories rather than reacting to short-term volatility.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;BOOSTED_TREE&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: This model is valuable for the exploratory analysis of past sales behavior. It can effectively capture past patterns, short-term fluctuations and seasonality, helping to understand historical dynamics and how sales responded to recent changes.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
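As a rough illustration of the forecasting side (not BMG's actual schema; the dataset, table, and column names below are invented), an ARIMA_PLUS model in BigQuery ML is created and queried with plain SQL:

```python
# Hypothetical BigQuery ML statements; all names are made up for illustration.
create_model_sql = """
CREATE OR REPLACE MODEL `royalties.revenue_forecast`
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'sales_month',
  time_series_data_col = 'net_revenue',
  time_series_id_col = 'territory'
) AS
SELECT sales_month, net_revenue, territory
FROM `royalties.streaming_sales`
"""

# Forecast 12 periods ahead with a 90% prediction interval.
forecast_sql = """
SELECT *
FROM ML.FORECAST(MODEL `royalties.revenue_forecast`,
                 STRUCT(12 AS horizon, 0.9 AS confidence_level))
"""
```

Because the model is trained and served where the data already lives, no export pipeline is needed between the royalty tables and the forecasts.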
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;For Anomaly Detection &amp;amp; Exploratory Analysis:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;K-means and ANOMALY_DETECT&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; function: These are highly effective for identifying various anomaly types in datasets, such as sudden spikes, country-based deviations, missing sales periods, or sales reported without corresponding rights.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
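On the anomaly side, BigQuery ML scoring typically goes through the ML.DETECT_ANOMALIES table function, which flags rows against a trained model. A hedged sketch with invented names, reusing a hypothetical ARIMA_PLUS model:

```python
# Hypothetical query: flag rows the time-series model considers anomalous.
# For k-means models, a `contamination` parameter is used in the STRUCT
# instead of `anomaly_prob_threshold`.
detect_sql = """
SELECT *
FROM ML.DETECT_ANOMALIES(
  MODEL `royalties.revenue_forecast`,
  STRUCT(0.95 AS anomaly_prob_threshold),
  TABLE `royalties.streaming_sales`)
WHERE is_anomaly
"""
```

The function returns each input row with an `is_anomaly` flag and an anomaly probability, which is what makes one-click, at-scale review possible downstream.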
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Together, these models provide a comprehensive approach: ARIMA_PLUS offers robust future trend predictions, while other models contribute to a deeper understanding of past performance and the critical detection of anomalies. This combination supports proactive financial planning and helps safeguard royalty revenues.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;Data Flow in Big Query:&lt;/strong&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_QSxk1HD.max-1000x1000.jpg"
        
          alt="1"&gt;
        
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h2&gt;&lt;span style="vertical-align: baseline;"&gt;Finding the Gaps: Smarter Anomaly Detection&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;StreamSight doesn't just forecast earnings — it also flags when things don’t look right. Whether it's a missing sales period; unexpected spikes or dips in specific markets; or mismatches between reported revenue and rights ownership, the system can highlight problems that would normally require hours of manual review. And now it’s done at the click of a button.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;For example:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Missing sales periods&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: Gaps in data that could mean missing money.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Sales mismatched with rights&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: Revenue reported from a region where rights aren’t properly registered.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Global irregularities&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;: Sudden increases in streams or sales that suggest a reporting error or unusual promotional impact.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;With StreamSight, these issues are detected at scale, allowing teams to take faster and more consistent action.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;The StreamSight Dashboard:&lt;/strong&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2_388obRz.max-1000x1000.png"
        
          alt="2"&gt;
        
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h2&gt;&lt;span style="vertical-align: baseline;"&gt;Built on Google Cloud for Scale and Simplicity&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The technology behind StreamSight is just as innovative as its mission. Developed on Google Cloud, it uses:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;BigQuery ML&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; to run machine learning models directly on large datasets using SQL.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Vertex AI and Python&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; for advanced analysis and model training.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Looker Studio&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; to create dashboards that make results easy to interpret and share across teams.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;This combination of tools made it possible to move quickly from concept to implementation, while keeping the system scalable and cost-effective.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style="vertical-align: baseline;"&gt;A Foundation for the Future&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;While StreamSight is currently a proof of concept, its early success points to vast potential. Future enhancements could include:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Adding data from concert tours and marketing campaigns to refine predictions.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Include more Digital Service Providers (DSPs) that provide access to digital music, such as Amazon, Apple Music or Spotify to allow for better cross-platform comparisons.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Factoring in social media trends or fan engagement as additional inputs.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Segmenting analysis by genre, region, music creator type, or release format.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;By using advanced technology for royalty processing, we're not just solving problems — we're building a more transparent ecosystem for the future, one that supports our shared commitment to the fairer and faster payout of the artist's share of label and publisher royalties.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The collaboration between BMG and Google Cloud demonstrates the music industry’s potential to use advanced technology to create a future where data drives smarter decisions and where everyone involved can benefit from a clearer picture of where music earns its value.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 04 Sep 2025 13:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/streamsight-driving-transparency-in-music-royalties-with-ai-powered-forecasting/</guid><category>AI &amp; Machine Learning</category><category>Customers</category><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>StreamSight: Driving transparency in music royalties with AI-powered forecasting</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/streamsight-driving-transparency-in-music-royalties-with-ai-powered-forecasting/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Kiki Ganzemüller</name><title>Top Partner Manager, EMEA Music Partnerships, Google</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Thomas Heigl</name><title>Key Account Director for Media and Entertainment, Germany, Google Cloud</title><department></department><company></company></author></item><item><title>How Jina AI built its 100-billion-token web grounding system with Cloud Run GPUs</title><link>https://cloud.google.com/blog/products/application-development/how-jina-ai-built-its-100-billion-token-web-grounding-system-with-cloud-run-gpus/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;strong style="font-style: italic; vertical-align: baseline;"&gt;Editor’s note:&lt;/strong&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt; &lt;/span&gt;&lt;span 
style="font-style: italic; vertical-align: baseline;"&gt;The Jina AI Reader is a specialized tool that transforms raw web content from URLs or local files into a clean, structured, and LLM-friendly format.  In this post, Han Xiao details how Cloud Run empowers Jina AI to build a secure, reliable, and massively scalable web scraping system that remains economically viable. This post explores the collaborative innovation, technical hurdles, and breakthrough achievements behind Jina Reader, a web grounding system now processing 100 billion tokens daily.&lt;/span&gt;&lt;/p&gt;
&lt;hr/&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;When Jina Reader launched in April 2024, its explosive growth — serving more than 10 million requests and 100 billion tokens daily — confirmed huge demand for reliable, LLM-friendly web content. Jina Reader isn't just another scraper; it takes a different approach to  how AI systems consume web content by transforming raw, noisy web pages into clean, structured markdown.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The core challenge for any AI system processing web data is the "web grounding problem." Modern websites are a chaotic mix of content, ads, tracking scripts, and dynamic JavaScript, creating an overwhelming noise-to-signal ratio. Traditional scrapers struggle with this complexity, often failing on dynamic single-page applications or generating unusable, ungrounded data for LLMs. &lt;/span&gt;&lt;strong style="vertical-align: baseline;"&gt;Jina Reader’s breakthrough, ReaderLM-v2, is a purpose-built 1.5-billion-parameter language model &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;that intelligently extracts content, trained on millions of documents to understand web structure beyond simple rules.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/Figure_1_Jina_Reader.max-1000x1000.png"
        
          alt="Figure 1 Jina Reader"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="e300m"&gt;FIgure 1: Jina Reader: a sophisticated browser automation system&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Cloud Run: The engine behind Jina Reader's scale&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Jina Reader faced  inherent burstiness and unpredictability of web scraping workloads. Traditional virtual machine setups meant either costly over-provisioning or critical failures under load. Google Cloud Run became the essential solution,&lt;/span&gt;&lt;strong style="vertical-align: baseline;"&gt; enabling Jina Reader to build a web scraping system that is secure, reliable, massively scalable, and economically viable&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;The web grounding app (the browser automation system that scrapes and cleans web content) is hosted on Cloud Run (CPU). It runs full Chrome browser instances.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;ReaderLM-v2 is a purpose-built 1.5-billion-parameter language model for HTML-to-markdown conversion that runs on Cloud Run with serverless GPUs.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Cloud Run directly addressed several critical issues:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong style="vertical-align: baseline;"&gt;Optimized Performance:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The deep collaboration between Jina Reader and Google Cloud engineering was essential. We jointly optimized container lifecycle management for browser automation, reducing startup times from over 10 seconds to under two seconds  through prewarming, optimized images, and intelligent resource allocation. For ReaderLM-v2, Google's team helped create custom container configurations to efficiently run a 1.5-billion-parameter model on Cloud Run GPUs. The on-demand scaling and fast start capabilities of Cloud Run GPUs were critical in helping optimize model performance, directly impacting our ability to process 100 billion tokens daily.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;
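One common pattern behind fast serverless cold starts is paying the model-load cost once per container rather than once per request. A minimal sketch of that pattern in generic Python (an illustration only, not Jina's actual serving code):

```python
# Module-level cache: survives across requests within one container
# instance, but is rebuilt when a new instance cold-starts.
_model = None

def get_model(loader):
    """Lazily initialize and cache a model.

    `loader` is any zero-argument callable that returns the loaded
    model (hypothetical here). The expensive call runs at most once
    per container, so subsequent requests are served warm.
    """
    global _model
    if _model is None:
        _model = loader()
    return _model
```

Combined with prewarming and slim container images, this kind of request-path laziness is what lets a heavyweight dependency like a browser or a 1.5B-parameter model coexist with sub-two-second startup targets.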
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/Figure_2_On-demand_AI_inference_with_Cloud.max-1000x1000.png"
        
          alt="Figure 2 On-demand AI inference with Cloud Run GPUs"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="e300m"&gt;Figure 2: On-demand AI inference with Cloud Run GPUs (hosting ReaderLM-v2 model)&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;True Scale-to-Zero Serverless:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; Cloud Run's ability to run full Chrome browser instances allowed cost-effective operations. Each request spawns an isolated container with its own headless Chrome, and crucially, these containers disappear when the request is done. This ephemeral nature is vital for processing untrusted web content, mitigating security risks and memory leaks.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Global Multi-Regional Deployment:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; Cloud Run's global presence ensures requests are processed close to both the users and target websites. This significantly minimizes latency and boosts success rates, even against geo-restricted content.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Massive &amp;amp; Automatic Scaling:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The platform seamlessly scales from a handful to over 1,000 container instances during peak traffic, handling the unpredictable nature of web scraping without manual intervention.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Economic Viability:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; With Cloud Run's pay-per-use model, Jina Reader can offer a generous free tier to end users while maintaining profitability even with substantial monthly usage. This pricing flexibility was fundamental to our widespread adoption.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Resilience and Operational Excellence:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; During a recent sustained DDoS attack, Cloud Run's serverless architecture proved invaluable. It scaled up to absorb massive loads (over 100,000 requests per minute), while intelligent rate limiting filtered malicious traffic. Critically, costs returned to normal immediately after the attack subsided due to its scale-to-zero capability.  The system has maintained over 99.9% uptime.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
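The rate limiting described above can be sketched as a simple per-client token bucket. This is an illustrative stand-in; the post does not detail the production mechanism, which would typically live in a managed layer rather than in application code:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter.

    Each client gets `burst` tokens of headroom; tokens refill at
    `rate` per second. Requests without a token are rejected, which
    is how a limiter can shed a flood while normal traffic passes.
    """
    def __init__(self, rate: float, burst: int):
        self.rate = rate                 # tokens added per second
        self.burst = burst               # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Paired with scale-to-zero, the economics work in both directions: capacity absorbs the spike, the limiter drops the malicious share, and cost falls back to baseline as soon as the attack ends.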
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Conclusion&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Building &lt;/span&gt;&lt;a href="https://jina.ai/reader/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Jina Reader&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; on Google Cloud Run proved that AI capabilities and cloud-native architecture are complementary. Cloud Run's unique capabilities — serverless GPUs, container isolation, global deployment and scale-to-zero economics — made the architecture possible. Our close partnership demonstrates that deep integration between AI-first systems and modern cloud infrastructure can create capabilities previously thought impossible, enabling us to process 100 billion tokens every day.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;You can discover more about Cloud Run GPUs &lt;/span&gt;&lt;a href="https://cloud.google.com/run/docs/configuring/services/gpu"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;on our product page&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;, and if you want to learn how to host a large language model on Cloud Run, &lt;/span&gt;&lt;a href="https://youtu.be/GKIUmb99HQc?si=SFMIAkXEJJkTXHxA" rel="noopener" target="_blank"&gt;&lt;span style="font-style: italic; text-decoration: underline; vertical-align: baseline;"&gt;watch this video&lt;/span&gt;&lt;/a&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-aside"&gt;&lt;dl&gt;
    &lt;dt&gt;aside_block&lt;/dt&gt;
    &lt;dd&gt;&amp;lt;ListValue: [StructValue([(&amp;#x27;title&amp;#x27;, &amp;#x27;Try Google Cloud for free&amp;#x27;), (&amp;#x27;body&amp;#x27;, &amp;lt;wagtail.rich_text.RichText object at 0x7f66db4809a0&amp;gt;), (&amp;#x27;btn_text&amp;#x27;, &amp;#x27;Get started for free&amp;#x27;), (&amp;#x27;href&amp;#x27;, &amp;#x27;https://console.cloud.google.com/freetrial?redirectPath=/welcome&amp;#x27;), (&amp;#x27;image&amp;#x27;, None)])]&amp;gt;&lt;/dd&gt;
&lt;/dl&gt;&lt;/div&gt;</description><pubDate>Fri, 11 Jul 2025 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/application-development/how-jina-ai-built-its-100-billion-token-web-grounding-system-with-cloud-run-gpus/</guid><category>AI &amp; Machine Learning</category><category>Media &amp; Entertainment</category><category>Customers</category><category>Application Development</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>How Jina AI built its 100-billion-token web grounding system with Cloud Run GPUs</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/application-development/how-jina-ai-built-its-100-billion-token-web-grounding-system-with-cloud-run-gpus/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Han Xiao</name><title>CEO, Jina AI</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Yunong Xiao</name><title>Director of Engineering, Google Cloud</title><department></department><company></company></author></item><item><title>From news to insights: Glance leverages Google Cloud to build a Gemini-powered Content Knowledge Graph (CKG)</title><link>https://cloud.google.com/blog/topics/customers/glance-builds-gemini-powered-knowledge-graph-with-google-cloud/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In today's hyperconnected world, delivering personalized content at scale requires more than just aggregating information – it demands deep understanding of context, relationships, and user preferences. Glance, a leading content discovery platform that delivers personalized, real-time content experiences on mobile lock screens across the globe, serves over 300 million users worldwide across 450 million devices. 
Beyond news aggregation, Glance curates diverse content including entertainment, sports, gaming, shopping, and lifestyle content, making every glance at the phone screen meaningful and engaging.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;However, with the exponential growth of digital content, particularly the overwhelming volume of daily news articles, Glance faced a critical challenge: how to effectively navigate this information while maintaining the personalized, contextual experiences users expect. The existing search and content discovery capabilities needed significant enhancement to uncover emerging trends, improve search relevance, provide deeper context, and most importantly, deliver truly personalized content recommendations that resonate with individual user preferences and behaviors.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Glance partnered with Google Cloud Consulting team to build a sophisticated Content Knowledge Graph (CKG) that addresses these challenges head-on. This solution leverages Google Cloud's advanced AI and data processing capabilities, including Gemini models, BigQuery, Vertex AI, and Google Cloud partner Neo4j, to ingest, process, extract, standardize, classify, and structure news data into a dynamic network of interconnected entities and relationships. The Content Knowledge Graph has dramatically improved search relevance, enhanced personalized content discovery, provided deeper contextual insights, increased user engagement, and improved scalability and efficiency.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-aside"&gt;&lt;dl&gt;
    &lt;dt&gt;aside_block&lt;/dt&gt;
    &lt;dd&gt;&amp;lt;ListValue: [StructValue([(&amp;#x27;title&amp;#x27;, &amp;#x27;Try Google Cloud for free&amp;#x27;), (&amp;#x27;body&amp;#x27;, &amp;lt;wagtail.rich_text.RichText object at 0x7f66dacdda60&amp;gt;), (&amp;#x27;btn_text&amp;#x27;, &amp;#x27;Get started for free&amp;#x27;), (&amp;#x27;href&amp;#x27;, &amp;#x27;https://console.cloud.google.com/freetrial?redirectPath=/welcome&amp;#x27;), (&amp;#x27;image&amp;#x27;, None)])]&amp;gt;&lt;/dd&gt;
&lt;/dl&gt;&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;Tackling the content discovery and personalization challenge &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Glance's mission extends far beyond traditional content aggregation. As a platform serving hundreds of millions of users globally, Glance must deliver hyper-personalized content experiences that anticipate user interests and adapt to evolving preferences in real-time. The challenge was multifaceted:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Uncovering trends at scale&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Identifying emerging topics, viral content, and the complex relationships between different content categories across news, entertainment, sports, and lifestyle domains&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Enhancing personalized search&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Improving the accuracy and relevance of search results based on individual user behavior patterns, preferences, and contextual signals&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Providing intelligent context&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Offering users deeper understanding not just of individual pieces of content, but how different stories, events, and topics connect across the broader content ecosystem&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Scaling personalization&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Delivering these sophisticated experiences across 300+ million users while maintaining real-time responsiveness and relevance&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Manually analyzing and connecting the dots within this vast, multi-dimensional content landscape was simply not scalable. Glance needed an intelligent, automated approach that could understand content at both granular and contextual levels while powering personalized recommendations at unprecedented scale.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;Building a Gemini-powered content intelligence engine&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Glance and Google Cloud Consulting team collaborated to architect and implement a comprehensive Content Knowledge Graph that transforms how content is understood, connected, and delivered. This sophisticated system leverages the full spectrum of Google Cloud's AI and data processing capabilities:&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_Hy0BxZW.max-1000x1000.png"
        
          alt="1"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="5it83"&gt;Architecture of Content Processing for Knowledge Graph Creation&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Intelligent content ingestion and processing&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – The system ingests content from diverse sources beyond news, including entertainment articles, sports updates, lifestyle content, and trending topics, storing them in BigQuery for efficient, scalable processing.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Advanced entity extraction and relationship mapping&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Using Gemini foundational models, the system extracts key entities (people, organizations, locations, events, brands) and identifies complex relationships between them across different content categories.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Entity standardization and knowledge linking&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Extracted entities are normalized using Gemini's advanced language understanding and linked to authoritative knowledge sources like Wikipedia, ensuring consistency and enabling sophisticated cross-content analysis.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Multi-dimensional content classification&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Gemini foundation models classify content into granular categories using the IAB content taxonomy while also identifying sentiment, urgency, and relevance signals for personalization.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Intelligent content summarization and tagging&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Gemini generates contextual tags, compelling short headlines, and category labels, enabling users to quickly grasp content essence while powering recommendation algorithms.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Dynamic Knowledge Graph construction&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – The extracted information is structured into a Neo4j graph database, creating a living, breathing network of interconnected entities, topics, relationships, and user interaction patterns.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Real-time trend analysis and prediction&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – The system integrates with external trend APIs and analyzes user engagement patterns to identify and predict trending topics, providing actionable insights for content curation.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Interactive analytics dashboard&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – NeoDash powers an interactive dashboard for monitoring trending content, analyzing entity relationships, and visualizing content performance across different user segments.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
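As a rough illustration of the graph-construction step, the following builds Cypher MERGE statements linking an article to its extracted entities. The Article/Entity node labels and the MENTIONS relationship are hypothetical stand-ins; Glance's actual graph schema is not described in this post:

```python
def entity_merge_cypher(article_id: str, entities):
    """Build idempotent Cypher statements for one article.

    `entities` is a list of (name, label) pairs, e.g. the output of
    an extraction model. MERGE (rather than CREATE) keeps nodes and
    edges deduplicated as the same entity recurs across articles.
    """
    stmts = [f"MERGE (a:Article {{id: '{article_id}'}})"]
    for name, label in entities:
        stmts.append(
            f"MERGE (e:{label} {{name: '{name}'}}) "
            f"MERGE (a)-[:MENTIONS]-&gt;(e)"
        )
    return stmts
```

In a real pipeline the strings would be parameterized queries sent through the Neo4j driver (to avoid injection and enable query caching), but the MERGE-based idempotency is the key idea: re-ingesting an article converges on the same subgraph instead of duplicating it.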
&lt;p&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;Diagram 2:&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/original_images/2_QJAvdih.gif"
        
          alt="2"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="5it83"&gt;How entities extracted from one news article help identify related news articles&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;Engineering for Global Scale and Performance&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Glance enhanced the Content Knowledge Graph solution to handle 50,000+ daily articles across multiple content categories. The engineering team implemented several critical optimizations:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Event-driven architecture transformation&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Migrating to a Kafka-based event-driven architecture with intelligent retries and asynchronous operations resulted in 4x throughput improvement, enabling real-time content processing at massive scale.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Graph database optimization&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Neo4j query optimization and indexing strategies drastically reduced query response times from seconds to milliseconds, enabling real-time content recommendations.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Kubernetes-native deployment&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Moving to managed Google Kubernetes Engine (GKE) with auto-scaling capabilities improved system reliability and resource utilization.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Strategic performance optimization&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Applying the 80-20 principle to focus on high-impact optimizations, coupled with Redis caching and Cloud Spanner for critical data, reduced processing latency by 80% and boosted recommendation coverage from under 60% to over 85%.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Intelligent load balancing&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Implementing smart load distribution across processing pipelines ensured consistent performance even during viral content spikes and peak traffic periods.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
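The "intelligent retries" mentioned above can be illustrated with a generic exponential-backoff helper. This is a sketch only; an event-driven pipeline like the one described would typically rely on Kafka redelivery semantics rather than in-process loops:

```python
import time

def with_retries(fn, attempts: int = 4, base_delay: float = 0.5):
    """Call `fn` until it succeeds, backing off exponentially.

    `fn` is any zero-argument callable. Delays double on each
    failure (base_delay, 2x, 4x, ...); the last failure re-raises
    so the caller (or a dead-letter queue) can handle it.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Backoff matters at this scale because immediate retries against an already-struggling downstream service (a model endpoint, a graph database) amplify the load spike instead of riding it out.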
&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt; Business impact&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The Content Knowledge Graph has delivered measurable improvements across key business metrics:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Enhanced content discovery performance&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – CKG-powered content recommendations boosted Cards per Session (CPS) by 24%, directly improving user engagement and platform stickiness.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Significantly increased user engagement&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – More relevant and contextually aware content delivery resulted in higher click-through rates and a 5% increase in swiping sessions, particularly in the related content sections.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Real-time trend intelligence&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – Users now discover trending topics instantly across news, entertainment, sports, and lifestyle categories, with faster trend detection compared to previous systems.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Data-driven content strategy&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – The CKG provides comprehensive, actionable insights into content performance, user preferences, and emerging trends, enabling data-driven editorial and curation decisions.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Global scalability and efficiency&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; – The cloud-native architecture seamlessly handles Glance's ever growing global content pool while maintaining cost efficiency and performance.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong style="vertical-align: baseline;"&gt;Shaping the Future of content discovery&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The Glance Content Knowledge Graph transforms raw, unstructured content into a sophisticated, interconnected knowledge network, and has empowered Glance to deliver truly engaging content experiences that anticipate user needs. The solution's success lies not just in its technical sophistication, but in its ability to enhance the human experience of content discovery – making every interaction with Glance more meaningful, relevant, and engaging.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;As content continues to proliferate and user expectations for personalization grow, the principles and technologies demonstrated in this project provide a blueprint for the future of intelligent content platforms. We're excited to see how Glance leverages this powerful foundation to further innovate in personalized content discovery and set new standards for user experience in the digital content ecosystem.&lt;/span&gt;&lt;/p&gt;
&lt;hr/&gt;
&lt;p&gt;&lt;sup&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;We’d like to give special thanks to the Glance Engineering team – Pradeep Tiwari, Himanshu Aggarwal, Krishna Yadav, and Aashirwad Kashyap, and the Google Cloud Consulting team - Parag Mhatre, Ashish Tendulkar, Neeraj Shivhare, Hem Anand – for their collaboration and expertise in delivering this project. &lt;/span&gt;&lt;/sup&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 10 Jul 2025 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/topics/customers/glance-builds-gemini-powered-knowledge-graph-with-google-cloud/</guid><category>Media &amp; Entertainment</category><category>Customers</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>From news to insights: Glance leverages Google Cloud to build a Gemini-powered Content Knowledge Graph (CKG)</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/topics/customers/glance-builds-gemini-powered-knowledge-graph-with-google-cloud/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Himanshu Aggarwal</name><title>Machine Learning Engineer, Glance</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Vijay Ram Surampudi</name><title>AI Consultant, Professional Services, Google Cloud</title><department></department><company></company></author></item><item><title>The AI lens: How Arpeely uses multimodality and BigQuery to revolutionize AdTech</title><link>https://cloud.google.com/blog/products/data-analytics/how-arpeely-uses-google-cloud-ai-for-smarter-ads/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Traditional programmatic advertising often misses the mark. Flat pricing, limited targeting, and a focus on immediate conversions over long-term customer value leave advertisers wanting more. 
At &lt;/span&gt;&lt;a href="https://www.arpeely.com/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Arpeely&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, we're changing the game in three ways by:&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Putting our own money on the line for performance with a different business model.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Taking care of creatives and the funnel end-to-end.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Optimizing on lifetime value.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Arpeely is on a mission to transform the way advertisers connect with their audiences online, fueled by one simple belief: To truly understand the internet, you need to see it. Not just the text and code — but the images, the emotions, the nuances that make each web page unique. That's why we're doubling down on our ad-tech platform on Google Cloud.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In this post, we’ll take a closer look at how Google Cloud is helping us to leverage the power of multimodality and AI in products like BigQuery and Gemini to make media buying smarter, more efficient, and laser-focused on long-term value. &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Transforming how Arpeely ‘sees’ the internet with multimodal AI&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our AI algorithms analyze massive datasets to identify the users most likely to become loyal, high-value customers for our clients, but we need to understand the web on a deeper level to do that effectively. That’s where multimodality — the ability to process multiple types of data — comes in. We don't just look at text; we use Google Cloud's powerful multimodal AI, including &lt;/span&gt;&lt;a href="https://developers.google.com/learn/pathways/solution-ai-gemini-images" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Gemini Pro Vision&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and &lt;/span&gt;&lt;a href="https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#sundar-note" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Gemini 1.5 Flash&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to analyze web page screenshots, extracting visual information that enriches our understanding in real time. This allows us to cluster billions of websites with remarkable speed and precision, uncovering connections and insights that traditional methods would miss.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
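To make the clustering idea above concrete, here is a minimal, illustrative sketch: once pages have been turned into embedding vectors (by any multimodal model), similar pages can be grouped by cosine similarity. The 3-dimensional vectors, the greedy algorithm, and the 0.9 threshold are all made-up assumptions for illustration — real embeddings have hundreds of dimensions and production systems use approximate nearest-neighbor indexes, not this loop.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_cluster(embeddings, threshold=0.9):
    """Assign each vector to the most similar existing cluster representative,
    creating a new cluster when nothing is similar enough."""
    reps, assignments = [], []
    for vec in embeddings:
        best, best_sim = None, threshold
        for i, rep in enumerate(reps):
            sim = cosine(vec, rep)
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            reps.append(vec)  # first member becomes the cluster representative
            assignments.append(len(reps) - 1)
        else:
            assignments.append(best)
    return assignments

# Made-up 3-d "page embeddings": two sports-like pages, one cooking-like page.
pages = [[0.9, 0.1, 0.0], [0.85, 0.15, 0.05], [0.0, 0.1, 0.95]]
print(greedy_cluster(pages))  # → [0, 0, 1]
```

The first two pages land in the same cluster because their embeddings point in nearly the same direction; the third starts a new one.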
&lt;div class="block-aside"&gt;&lt;dl&gt;
    &lt;dt&gt;aside_block&lt;/dt&gt;
    &lt;dd&gt;&amp;lt;ListValue: [StructValue([(&amp;#x27;title&amp;#x27;, &amp;#x27;$300 in free credit to try Google Cloud data analytics&amp;#x27;), (&amp;#x27;body&amp;#x27;, &amp;lt;wagtail.rich_text.RichText object at 0x7f66d7795f10&amp;gt;), (&amp;#x27;btn_text&amp;#x27;, &amp;#x27;Start building for free&amp;#x27;), (&amp;#x27;href&amp;#x27;, &amp;#x27;http://console.cloud.google.com/freetrial?redirectPath=/bigquery/&amp;#x27;), (&amp;#x27;image&amp;#x27;, None)])]&amp;gt;&lt;/dd&gt;
&lt;/dl&gt;&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Taming the data deluge with BigQuery and Pub/Sub&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Building an AI-powered, multimodal ad platform requires handling a truly staggering amount of data. With &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;BigQuery&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, we can constantly crunch numbers, analyze user behavior, and generate insights from over 25 petabytes of compressed data, fueling our real-time bidding engine.  &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;But AI demands speed as well as scale. That's why we rely on&lt;/span&gt;&lt;a href="https://cloud.google.com/pubsub/docs/overview"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt; Pub/Sub&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, Google Cloud's real-time messaging service, to keep the information flowing. Pub/Sub acts like our central nervous system, connecting our microservices and ensuring that our AI algorithms have the up-to-the-second data they need to make smart decisions.&lt;/span&gt;&lt;/p&gt;
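The "central nervous system" pattern described above can be sketched conceptually: a publisher emits a message to a topic, and every subscriber on that topic receives its own copy, with neither side knowing about the other. The toy in-process bus below is a conceptual stand-in, not the actual Pub/Sub client library; the topic name and payload are illustrative.

```python
from collections import defaultdict

class MiniBus:
    """Tiny in-process stand-in for a topic-based message bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber on the topic receives its own copy of the message.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MiniBus()
seen = []
bus.subscribe("bid-requests", lambda msg: seen.append(("bidder", msg)))
bus.subscribe("bid-requests", lambda msg: seen.append(("logger", msg)))
bus.publish("bid-requests", {"page": "example.com", "floor": 0.25})
print(seen)  # both services saw the event without knowing about each other
```

The real service adds what this toy omits: durable storage, at-least-once delivery, and independent acknowledgment per subscription.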
&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Going beyond keywords: BigQuery vector search for unprecedented ad relevance&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Traditional ad targeting relies on keywords, which are blunt instruments in the nuanced world of online behavior. Arpeely takes a smarter approach by using the &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-generate-embedding"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;ML.GENERATE_EMBEDDING&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; function within &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery/docs/bqml-introduction"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;BigQuery ML&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to generate embeddings for each webpage. By representing web pages and ad creatives as vectors in a multi-dimensional space and using &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery/docs/vector-search-intro"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;BigQuery vector search&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, we can understand the semantic relationships between them in real time. This means we can deliver highly contextual ads that go beyond simple keyword matching, resulting in greater relevance, higher click-through rates, and better campaign performance for our clients, with a 15% uplift in revenue.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
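At its core, the vector-search step described above is a nearest-neighbor lookup: given a page embedding, find the ad creative whose embedding is most similar. The sketch below shows that core idea with tiny made-up 3-d vectors and hypothetical creative names; a real BigQuery vector search works over high-dimensional embeddings with an index, not a brute-force scan like this.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def nearest_ad(page_vec, ad_vecs):
    """Return the index of the ad creative whose embedding best matches the page."""
    return max(range(len(ad_vecs)), key=lambda i: cosine(page_vec, ad_vecs[i]))

# Hypothetical creatives and 3-d embeddings (real ones have hundreds of dimensions).
ads = {"running-shoes": [0.8, 0.2, 0.1], "mortgage": [0.1, 0.9, 0.2]}
page = [0.75, 0.25, 0.05]  # embedding of, say, a fitness article
names = list(ads)
print(names[nearest_ad(page, list(ads.values()))])  # → running-shoes
```

The fitness-like page matches the shoe creative even though no keyword is shared — which is exactly the advantage over keyword targeting.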
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_f0K0sL8.max-1000x1000.png"
        
          alt="Image 1"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;Making ads blend in and stand out with Visual Question Answering&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our commitment to understanding the visual web goes even further with Visual Question Answering (VQA). By training AI models to "see" and interpret images, we can extract detailed information about web pages, such as dominant colors, layouts, and even emotional tones. Our VQA models enable us to dynamically adjust the look and feel of our ads to match the context of each page, creating a more seamless and engaging experience for users, resulting in a 28% increase in user engagement.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2_vo8JgKD.max-1000x1000.png"
        
          alt="Image 2"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="ugdcg"&gt;Image created using Gemini&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h3&gt;&lt;strong style="vertical-align: baseline;"&gt;The Google Cloud advantage: Building the Future of AdTech&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Building Arpeely on Google Cloud has been instrumental in bringing our AI-powered vision to life. The platform's scalability, serverless offerings, and unified ecosystem give us the agility and efficiency we need to innovate at a rapid pace.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;We're incredibly excited about the future of ad-tech and the role AI will continue to play. With Google Cloud as our trusted partner, we're confident in our ability to lead the way toward a more intelligent, effective, and value-driven advertising landscape.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Get started with &lt;/span&gt;&lt;a href="https://cloud.google.com/bigquery/docs/generate-multimodal-embeddings"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;multimodality use cases in BigQuery&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; today.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Fri, 21 Mar 2025 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/data-analytics/how-arpeely-uses-google-cloud-ai-for-smarter-ads/</guid><category>AI &amp; Machine Learning</category><category>Customers</category><category>Media &amp; Entertainment</category><category>Data Analytics</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>The AI lens: How Arpeely uses multimodality and BigQuery to revolutionize AdTech</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/data-analytics/how-arpeely-uses-google-cloud-ai-for-smarter-ads/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Roee Sheffer</name><title>VP of Research &amp; Development, Arpeely</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Moran Cohen-Koller</name><title>Director of Data, Arpeely</title><department></department><company></company></author></item><item><title>Hakuhodo Technologies: The transformative impact of SRE</title><link>https://cloud.google.com/blog/products/devops-sre/how-hakuhodo-technologies-transforms-its-organization-with-sre/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;a href="https://www.hakuhodo-technologies.co.jp/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Hakuhodo Technologies&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, a specialized technology company of the &lt;/span&gt;&lt;a 
href="https://www.hakuhodody-holdings.co.jp/english/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Hakuhodo DY Group&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; — one of Japan’s leading advertising and media holding companies — is dedicated to enhancing our software development process to deliver new value and experiences to society and consumers through the integration of marketing and technology. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our IT Infrastructure Team at Hakuhodo Technologies operates cross-functionally, ensuring the stable operation of the public cloud that supports the diverse services within the Hakuhodo DY Group. We also provide expertise and operational support for public cloud initiatives.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our value is to excel in the cloud and infrastructure domain, exhibiting a strong sense of ownership, and embracing the challenge of creating new value.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Background and challenges&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The infrastructure team is tasked with developing and operating the application infrastructure tailored to each internal organization and service, in addition to managing shared infrastructure resources.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Following the principles of platform engineering and site reliability engineering (SRE), each team within the organization has adopted elements of SRE, including the implementation of post-mortems and the development of observability mechanisms. However, we encountered two primary challenges:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;As the infrastructure expanded, the number of people on the team grew rapidly, bringing in new members from diverse backgrounds. This made it necessary to clarify and standardize tasks, and provide a collective understanding of our current situation and alignment on our goals.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;We mainly communicate with the app team through a ticket-based system. In addition to expanding our workforce, we have also introduced remote working. As a result, team members may not be as well-acquainted as before. This lack of familiarity could potentially cause small misunderstandings that can escalate quickly.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;As our systems and organization expand, we believe that strengthening common understanding and cooperative relationships within the infrastructure team and the application team is essential for sustainable business growth. This has become a core element of our strategy.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;We believe that fostering an SRE mindset among both infrastructure and application team members and creating a culture based on that common understanding is essential to solving the issues above. To achieve this, we decided to implement the "SRE Core" program by Google Cloud Consulting, which serves as the first step in adopting SRE practices.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Change&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;First, through the "SRE Core" program, we revitalized communication between the application and infrastructure teams, which had previously had limited interaction. For example, some aspects of the program required information that was challenging for infrastructure members to gather and understand on their own, making cooperation with the application team essential.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Our critical user journey (CUJ), one of the SRE metrics, was established based on the business requirements of the app and the behavior of actual users. This information is typically managed by the app team, which frequently communicates with the business side. This time, we collaborated with the application team to create a CUJ, set service level indicators (SLIs) and service level objectives (SLOs) which included error budgets, performed risk analysis, and designed the necessary elements for SRE.&lt;/span&gt;&lt;/p&gt;
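One concrete artifact of the SLO work described above is the error budget: the fraction of a rolling window during which the service may fail to meet its SLI without breaching the SLO. A minimal sketch of the arithmetic follows; the 99.9% target and 30-day window are illustrative assumptions, not Hakuhodo Technologies' actual figures.

```python
def error_budget_minutes(slo, window_days):
    """Minutes of allowed unavailability for a given availability SLO
    over a rolling window of window_days days."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% availability SLO over 30 days leaves ~43.2 minutes of error budget.
print(round(error_budget_minutes(0.999, 30), 1))  # → 43.2
```

Tracking how much of this budget each incident consumes is what turns the SLO from a number into a shared decision-making tool between the app and infrastructure teams.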
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;This collaborative work and shared understanding served as a starting point, and we continued to build a closer working relationship even after the program ended, with infrastructure members also participating in sprint meetings that had previously been held only for the app team.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/Hakuhodo_-_Next_Tokyo.max-1000x1000.png"
        
          alt="Image_1"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Additionally, as an infrastructure team, we systematically learned when and why SRE activities are necessary, allowing us to reflect on and strengthen our SRE efforts that had been partially implemented.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;For example, we recently came to understand that the purpose of postmortems is not only to prevent the recurrence of incidents but also to gain insights from the differences in perspectives between team members. Learning the purpose of postmortems changed our team’s mindset. We now practice immediate improvement activities, such as formalizing the postmortem process, clarifying the creation of tickets for action items, and sharing postmortem minutes with the app team, which were previously kept internal.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;We also reaffirmed the importance of observability to consistently review and improve our current system. Regular meetings between the infrastructure and application teams allow us to jointly check metrics, which in turn helps maintain application performance and prevent potential issues.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;By elevating our previous partial SRE activities and integrating individual initiatives, the infrastructure team created an organizational activity cycle that has earned more trust. This enhanced cycle is now getting integrated into our original operational workflows.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Future plans&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;With the experience gained through the SRE Core program, the infrastructure team looks forward to expanding collaboration with application and business teams and increasing proactive activities. Currently, we are starting with collaborations on select applications, but we aim to use these success stories to broaden similar initiatives across the organization.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;It is important to remember that each app has different team members, business partners, environments, and cultures, so SRE activities must be tailored to each unique situation. We aim to harmonize and apply the content learned in this program with the understanding that SRE activities are not the goal, but are elements that support the goals of the apps and the business.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Additionally, our company has a Cloud Center of Excellence (CCoE) team dedicated to cross-organizational activities. The CCoE manages a portal site for company-wide information dissemination and a community platform for developers to connect. We plan to share the insights we've gained through these channels with other respective teams within our group companies. As the CCoE's internal activities mature, we also intend to share our knowledge and experiences externally.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Through these initiatives, we will continue our activities in the hope that internal members — beyond the CCoE and infrastructure organizations — take psychological safety into consideration during discussions and actions.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Supplement: Regarding psychological safety&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At our company, we have a diverse workforce with varying years of experience and perspectives. We believe that ensuring psychological safety is essential for achieving high performance.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;When psychological safety is lacking, for instance, if the person delivering bad news is blamed, reports tend to become superficial and do not lead to substantive discussions.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;This issue can also arise from psychological barriers, such as the omission of tasks known only to experienced employees, leading to problems caused by the fear of asking for clarification.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In a situation where psychological safety is ensured, we focus on systems rather than individuals, viewing problems as opportunities. For example, if errors occur due to manual work, the manual process itself is seen as the issue. Similarly, if a system failure with no prior similar case arises, it is considered an opportunity to gain new knowledge.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;By adopting this mindset, fear is removed from the equation, allowing for unbiased discussions and work.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;This allows every employee to perform at their best, regardless of their years of experience. &lt;/span&gt;&lt;span style="vertical-align: baseline;"&gt;Of course, this is not something that can be achieved through a single person. It will require a whole team or organization to recognize this to make it a reality.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Mon, 12 Aug 2024 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/devops-sre/how-hakuhodo-technologies-transforms-its-organization-with-sre/</guid><category>Media &amp; Entertainment</category><category>DevOps &amp; SRE</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Hakuhodo Technologies: The transformative impact of SRE</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/devops-sre/how-hakuhodo-technologies-transforms-its-organization-with-sre/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Yoshimasa Suzuki</name><title>CCoE Team Leader, Hakuhodo Technologies Inc.</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Takumi Kondo</name><title>CCoE Team Tech Lead, Hakuhodo Technologies Inc.</title><department></department><company></company></author></item><item><title>Building the cloud-native broadcast media supply chain with Google Cloud</title><link>https://cloud.google.com/blog/products/media-entertainment/how-cloud-enables-the-media-supply-chain/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The media supply chain is undergoing a major transformation in the digital age, and Google Cloud is leading the charge by partnering with key media companies and ISVs to build cloud-native solutions. 
This shift will enable broadcasters to streamline operations, reduce costs, and deliver more engaging content to global audiences. With a focus on openness, efficiency, and AI integration, Google Cloud is helping media companies unlock the full potential of the cloud for their future success.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The media supply chain brings viewers their favorite movies, TV shows, and live sports. From the moment a content creator conceives of an idea, the entire media supply chain kicks in to create, manage, and deliver digital media to its destination, whether that's a streaming service, set-top box, movie theater, or a music player. The media supply chain is complex and ever-evolving, and is essential for the success of the modern media enterprise. And with the rise of streaming services, social media, and direct-to-consumer business models, the media supply chain now centers on audiences and the end user, giving consumers more control over what content they watch and listen to. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;There are four key stages in the media supply chain:&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Creation: &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;The initial creation of the media, such as filming a movie or live sports event; this is where the “magic” is created.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Media processing and quality checks: &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Automatic and manual quality checks help ensure content is up to the organization’s standards, and then transformed into a digital format that can be easily distributed. A processing engine, meanwhile, converts camera and audio feeds into digital formats for easy updates. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Editing:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; Content is fine-tuned, with edits ranging from color grading for a blockbuster movie to on-the-spot replay shots for a live sports game. Steps often include color correction, and fixing audio quality. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Delivery and distribution:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The content is prepared for delivery involving transcoding, packaging and in many cases embedding closed captions. Once ready the content is delivered to viewers via satellite to terrestrial, and increasingly, digital.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The goal of media CTOs has always been to have a well-defined process to ensure that media is created, managed, and delivered efficiently and effectively. However, building and managing a media supply chain can be complex, and involve many different people and organizations. In fact, media supply chains are often patched together as media companies expand and acquire new companies. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;There are still other challenges with media supply chains, like moving them to the cloud. For example, in the past, broadcasters didn’t have to think about the cost of transferring content (i.e. egress), but with cloud, the cost can be significant. Other challenges include:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Increased complexity of monitoring across on-prem and cloud&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Time-alignment of systems in the cloud and on-prem &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;System availability in the cloud and cut over between on-prem and cloud&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Format standardization for content and metadata between cloud and on-prem and transmuxing and transcoding when transitioning from one to the other&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;When media companies initially transitioned to the cloud, the focus was often on quick wins and immediate cost savings. While this approach offered short-term benefits, it didn't fully address the need for a comprehensive overhaul of the media delivery process. This led to missed opportunities for long-term optimization and innovation within the cloud environment. Moving forward, a more holistic approach that considers the entire media workflow can unlock greater efficiency and cost-effectiveness for years to come.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At the same time, television is changing. At Google Cloud, we strongly believe that the future of television will revolve around producing live, local, and personalized experiences that cater to global audiences. There is an opportunity for broadcasters to engage audiences longer and more deeply in immersive content formats that respond to their interests, both implicitly and explicitly. But to meet these requirements would mean more streamlined production processes, facilities, with AI-driven automation to produce more content and viewing choices than ever before. A cloud-centric media supply chain will be the driver of these innovations.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Toward a cloud-native media supply chain&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At Google Cloud, we believe that moving the media supply chain to the cloud isn’t just about moving on-premises workloads to the cloud; it’s about running applications in a cloud-native fashion, to create high-quality, reliable software systems that can be delivered quickly and efficiently, while also enabling agility, observability, and automation. Utilizing DevOps and SRE methodology in the development process is essential to this process.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;A key component of the media supply chain is that media software providers — ISVs — complete the supply chain for broadcasters, and need to work in a cloud-native way to enable cloud transformation. These media ISVs provide specialist applications, and in many cases architectures, and unfortunately, some of these ISVs are not cloud-native today.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Google Cloud for Media &amp;amp; Entertainment's mission is to &lt;/span&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;"Empower media organizations to transform audience experiences through innovation.”&lt;/span&gt;&lt;span style="vertical-align: baseline;"&gt; We believe that the cloud can be truly transformative in shaping the future of the audience experience. Google Cloud has several areas of focus to allow customers to leverage the cloud for their media supply chain:&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Empower the existing ecosystem. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;We recognize the challenge for broadcasters to retrain operations staff on new broadcast applications. We will continue working with leading ISVs to enable their applications on Google Cloud so that they can take advantage of our capabilities and services to provide powerful, cloud-based products.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Focus on openness. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;The broadcast space has a wide diversity of content delivery standards to choose from. Google Cloud is committed to enabling customers to deliver content to the cloud using the standards of their choice. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: decimal; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Invest in application efficiency.&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; Google Cloud is focused on making existing ISV applications more efficient on Google Cloud, highlighting the flexibility and efficiency of underlying infrastructure.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;To help, we’ve established a Google Cloud Media Supply Chain council that works in partnership with leading strategic media customers to define the future state of cloud-enabled media supply chains. Our goal is to help advance media software and hardware vendors’ roadmaps to meet modern media companies requirements. Current Google Cloud Media Supply Chain Council members include Grupo Globo and TelevisaUnivision. The council also works closely with key ISVs that will define the future of the media supply chain and assist them in clarifying cloud requirements and providing engineering support.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Building the future of media supply chains: A three-step guide&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;To build a cloud-based media supply chain for the future, there are three key steps.&lt;/span&gt;&lt;/p&gt;
&lt;p role="presentation"&gt;&lt;strong&gt;&lt;span style="vertical-align: baseline;"&gt;1. &lt;/span&gt;&lt;/strong&gt;&lt;strong style="vertical-align: baseline;"&gt;Envisioning the blueprint for transformation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Google Cloud, along with key media companies, has been working on the future of a media supply chain that combines the flexibility of cloud infrastructure and the advanced capabilities of data platforms and AI/ML, including generative AI. A version of the future of the media supply chain is shown below.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/image1_WEIElOr.max-1000x1000.png"
        
          alt="image1"&gt;
        
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;As the blueprint depicts, the media supply chain of the future will allow media companies to identify key elements of their supply chain first and then select key ISVs to innovate on their platform. We believe there are certain components of the media supply chain that broadcasters should directly manage, such as metadata format and schema, choice of ISV vendors and specific customizations.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In the media supply chain ecosystem, we work with customers to build the green blocks in the graphic above, including providing AI/ML services at scale to enable media search and discovery. In parallel, we are working toward an ecosystem where our ISV partners (in the blue blocks) provide microservices that enable different functions of the media supply chain on-demand, so it can scale horizontally and function reliably. We are at the start of the journey, but we are rapidly making progress in terms of changing how our ISV partners think about the cloud.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;At Google Cloud, we are committed to providing secure, highly scalable, available, and reliable infrastructure; high-performance storage, network, and compute; along with reasonable cost and value-added AI/ML APIs. All of this in a secure environment that safeguards against unauthorized access and IP theft, where data and data analytics provide real-time insights into content, content metadata, alerts, and monitoring systems in a single pane of glass. &lt;/span&gt;&lt;/p&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;2. Selecting cloud-ready partners&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;When selecting ISV partners, media companies should consider both functional requirements, cloud maturity, and AI readiness.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Infrastructure and licensing. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;One of the cloud’s core value propositions is the ability to turn services on and off on demand. Media companies should give preference to ISV applications that are composed of microservices running on Kubernetes clusters or serverless functions, rather than monolithic code running on virtual machines. In addition to the technical infrastructure, the business license and Terms of Service itself also need to be scalable. ISVs that offer hourly or even by-the-minute licensing should be preferred over static and fixed licensing. Dynamic licensing allows customers to scale services without having to worry about obtaining new licenses, license keys, or contacting support teams from the ISV.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Scalability.&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The ability of the ISV to scale, especially horizontally, is of great importance. This is relevant as vertical scaling has limits. Virtual machines can only grow so big before they run into performance bottlenecks. It is also important to note that horizontal scalability enables scaling of not just CPUs, but also the network (using multiple NICs across nodes), memory, and storage.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Deployment and monitoring. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;The ISV’s application must be easy to deploy in the cloud, either via the Google Cloud Marketplace or using DevOps scripts that are easy to run and operate. ISV applications should also be able to be monitored and provide easy methods to integrate with cloud-native monitoring dashboards. This enables operational teams to have single-pane-of-glass visibility across cloud infrastructure, ISV applications, content, and subscribers.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Security. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;While it is imperative that ISVs focus on operational ease of use, they must not lose sight of security. ISV applications must adhere to best practices in terms of cloud security and undergo a rigorous test for vulnerabilities. A key deliverable for customers from ISVs should include architectural patterns and options that are rooted in best-in-class security paradigms.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Business model and stability. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;The size of the ISV, the stability of their business models and revenue streams, and a clear vision for incorporating AI technologies into their products are critical selection factors. As most broadcasters work with ISVs for many years, understanding their future vision and business model is important to ascertain the level of technical investment ISVs can make into their products.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;3. Supporting cloud-native application development&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Building a long-term roadmap with key technology partners is crucial for migrating media supply chains to the cloud, helping to ensure that all stakeholders are aligned on the goals and objectives of the migration, and that the right solutions are chosen to meet those goals. When evaluating an ISV as part of your cloud-native media supply chain, look for solutions with the following characteristics: workload consolidation, high availability, integration of disparate systems, AI/ML integration, and adoption of DevOps best practices.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Avoid fragmented workloads. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Cloud computing gives customers the ability to onboard new workloads and services on demand. However, it is important to avoid having a fragmented workload spread across on-premises and cloud environments. But make a decision if a specific workload is best suited to run on-premises or on cloud. Fragmented workloads can lead to increased costs and complexity, as well as performance and security issues. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Ensure reliability and high availability. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Broadcast workloads demand high reliability and availability. Broadcast workloads expect 99.999% (5 9’s) reliability which still translates to 4 minutes and 19 seconds of downtime a month. A 99.99% (4 9’s) reliability translates to 43 minutes and 12 seconds of downtime a month. ISVs need to architect for availability in cloud environments. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Integrate atomic workloads. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Another aspect of cloud migration is integrating self-contained workloads that span across multiple partners, VPCs, and sometimes between on-premises and the cloud. A key challenge in integration is the lack of standardization of transport signals, especially for video. There is also the added complexity of ensuring video and audio are in sync and timed accurately.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Enabling AI/ML integration. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Cloud-based generative AI and services are increasingly proving to be a valuable asset in simplifying the media supply chain and improving the quality of audience experience. We already see opportunity with ISVs using AI to manage content flows and enrich content using subtitles, dubbing and other content or metadata enrichment services.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Encouraging DevOps and monitoring best practices. &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;ISVs need to adopt DevOps and monitoring best practices. This helps to ensure that applications are deployed and developed reliably, and that they can be seamlessly upgraded, monitored, and controlled securely.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
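The downtime budgets behind the availability targets discussed above are simple arithmetic over the seconds in a month. A minimal sketch (plain Python, not tied to any broadcast system; the 30-day month is an assumption for round numbers):

```python
def monthly_downtime_seconds(availability: float, days: int = 30) -> float:
    """Downtime budget per month implied by an availability target."""
    month_seconds = days * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month
    return month_seconds * (1.0 - availability)

for label, target in [("three 9's", 0.999), ("four 9's", 0.9999), ("five 9's", 0.99999)]:
    minutes, seconds = divmod(monthly_downtime_seconds(target), 60)
    print(f"{label}: {int(minutes)} min {seconds:.0f} s per 30-day month")
```

Five 9's leaves under half a minute of monthly downtime budget, which is why applications must be architected for availability rather than relying on fast manual recovery.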
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Google Cloud works closely with key media partners to move atomic self-contained workloads to the cloud and to develop architecture patterns that support high availability, common standards and monitoring best practices.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Beyond lift and shift: TelevisaUnivision's cloud-native media transformation &lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;TelevisaUnivision is migrating its media workloads to the cloud to take advantage of the latest technologies and to avoid infrastructure obsolescence. The company is working with Google Cloud to make the migration a success. Migrating to the cloud is not a quick or easy process, but it is proving to be a worthwhile investment for TelevisaUnivision. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;“Google’s Content indexing will revolutionize the way we create and consume media. In the future, we will be able to find and use any piece of content ever created, regardless of where it is stored or who owns it. This will open up a world of possibilities for creators and consumers alike," said Marcos Obadia, SVP Global Engineering and Media Technology at TelevisaUnivision.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Likewise, moving the media supply chain to the cloud offers broadcasters several strategic advantages, said Ralf Jacob, EVP Broadcast Operations and Technology, TelevisaUnivision. “It enhances efficiency through automation, enables scalability for growing content needs, facilitates global distribution and localization, leads to cost savings, provides consolidated data insights and offers robust disaster recovery options” Ultimately, he added, “cloud-based media supply chains empower TelevisaUnivision to thrive in the evolving media landscape by optimizing operations and enhancing their ability to reach and engage a wider audience. For all of this to come together, you need the right partners.”&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The key learnings from TelevisaUnivision on the migration are:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Don't plan for a lift and shift. Instead, think about how the cloud can transform the media supply chain.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Be patient and be open to new ideas. The cloud is constantly evolving, so companies need to be willing to adapt.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Partner with ISVs to make sure they understand your requirements, as well as the tools and hyperscalers they can take advantage of.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Transforming media with cloud and AI&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;“The media and entertainment industry is experiencing a pivotal transformation as companies improve the way they run operations, produce content and deliver superior customer experiences,” said Anil Jain, Global Managing Director, Strategic Consumer Industries, Google Cloud. “Generative AI is one of the biggest driving forces for this transformation and in order to realize its full potential media companies, hyperscale cloud providers and specialized media vendors need to closely collaborate.”&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Cloud-based media supply chains can help media companies reduce costs, increase agility, and improve quality. By moving media supply chains to the cloud, media companies can eliminate the need to invest in and maintain on-premises infrastructure, which can lead to significant cost savings. Additionally, cloud-based media supply chains are more scalable and flexible than on-premises solutions, allowing media companies to quickly adapt to changes in the market, such as new content formats or distribution channels. Finally, cloud-based media supply chains offer access to powerful tools and services that can help media companies improve the quality of their content.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Google Cloud is proud to partner with media companies, helping them to make the move to the cloud and provide industry-specific tools that address customer’s most pressing challenges. &lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 30 May 2024 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/how-cloud-enables-the-media-supply-chain/</guid><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Building the cloud-native broadcast media supply chain with Google Cloud</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/how-cloud-enables-the-media-supply-chain/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Anshul Kapoor</name><title>Head of Broadcast Solutions, Google Cloud</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Jay Cherian</name><title>Solution Architect - Media &amp; Entertainment, Google Cloud</title><department></department><company></company></author></item><item><title>Paramount+: A streaming powerhouse with limitless entertainment</title><link>https://cloud.google.com/blog/products/media-entertainment/paramount-global-built-its-streaming-platform-on-google-cloud/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;a href="https://www.paramountplus.com/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Paramount+&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; is a treasure trove of streaming entertainment for a global audience. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;With a click, swipe, or voice command, viewers have instant access to iconic films like "The Godfather" and "Top Gun", television classics like "Star Trek" and "Survivor," and modern hits like "Yellowstone,” "1883," and "Halo." &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In addition to its immense library of filmed entertainment, Paramount+ also brings the excitement of live sports straight to consumers. Whether watching on connected televisions, web browsers, or mobile devices — and sometimes switching between them — viewers watched UEFA soccer, March Madness, the NFL, college football, and a variety of other sports this past year. This includes the 2024 Super Bowl LVIII, the most watched event in recent history with 123.4 million viewers across all platforms and the most-streamed Super Bowl in history, led by a record-setting audience on Paramount+.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In order to provide a global audience with streaming content available 24/7 — with huge demand spikes during those major live events — Paramount+ needed a robust technology stack that could provide speed, agility, security and global reach with zero downtime. To meet these diverse technology challenges, Paramount Global chose Google Cloud as the platform on which to build its streaming future. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Zero downtime was not only a technical goal but also the commitment to the business team to ensure subscribers can get consistent and seamless experience. Serving global subscribers requires robust architecture running on a scalable platform along with a well-trained team.  &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;The tech behind the curtain: Paramount+ and Google Cloud &lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The technology stack for &lt;/span&gt;&lt;a href="https://www.paramountplus.com/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Paramount+&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; had many components specially tailored to the needs of media and entertainment. The team adopted a services-based architecture powered by &lt;/span&gt;&lt;a href="https://cloud.google.com/kubernetes-engine?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=na-US-all-en-dr-bkws-all-all-trial-e-dr-1707554&amp;amp;utm_content=text-ad-none-any-DEV_c-CRE_665665924789-ADGP_Hybrid+%7C+BKWS+-+MIX+%7C+Txt-Containers-Google+Kubernetes+Engine-KWID_43700077212829823-aud-2232802565252:kwd-335784956140&amp;amp;utm_term=KW_kubernetes+google-ST_kubernetes+google&amp;amp;gad_source=1&amp;amp;gclid=Cj0KCQjw_-GxBhC1ARIsADGgDjt3ZRdXF0QtPgPXYTKKifuBcCRszC-JR_V8rX5y02d1pIqIq15793YaAlU8EALw_wcB&amp;amp;gclsrc=aw.ds&amp;amp;e=48754805"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Kubernetes Engine&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; (GKE) for flexibility, stability, scalability and quick updates. This allowed the team to improve development and operational velocity and performance. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Subscriber growth and fast changing business needs led to exploring different Google Cloud services including &lt;/span&gt;&lt;a href="https://cloud.google.com/products/compute"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Cloud Compute&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; Engine, &lt;/span&gt;&lt;a href="https://cloud.google.com/bigtable"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Bigtable&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, &lt;/span&gt;&lt;a href="https://cloud.google.com/pubsub/docs/overview"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Pub/Sub,&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; &lt;/span&gt;&lt;a href="https://cloud.google.com/products/operations"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Cloud Ops Suite&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, &lt;/span&gt;&lt;a href="https://cloud.google.com/network-intelligence-center"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Network Intelligent Center&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and &lt;/span&gt;&lt;a href="https://cloud.google.com/security/products/armor"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Cloud Armor&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to streamline technical operations. The architecture team, in collaboration with Google Cloud engineering, evaluated different products that could support the business SLA and security needs.  &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Security and service availability is critical for any customer-facing applications. In order to prevent &lt;/span&gt;&lt;a href="https://cloud.google.com/blog/products/identity-security/google-cloud-mitigated-largest-ddos-attack-peaking-above-398-million-rps?e=48754805"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;DDoS attacks&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; from disrupting the streaming experiences of tens of millions of users, Paramount+ uses the &lt;/span&gt;&lt;a href="https://cloud.google.com/security/products/armor?e=48754805"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Cloud Armor Managed Protection&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; along with other industry standard security tools. And to ensure zero downtime across its global platform, the Paramount+ technology team applied the DevSecOps process to architecture to integrate security from the start of the development process. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;To help ensure smooth operation and rapid updates, they adopted &lt;/span&gt;&lt;a href="https://cloud.google.com/sre"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Site Reliability Engineering&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; (SRE) practices in collaboration with Google Cloud. This approach hinges on automation, testing, proactive monitoring, and seamless teamwork. In addition to adopting &lt;/span&gt;&lt;a href="https://sre.google/sre-in-cloud/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;SRE&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; practices, the Paramount+ technology team utilizes a multi-zonal approach for resilience. This ensures true geo-redundancy, an active-active configuration that spans multiple Google Cloud regions. Through this strong partnership, Paramount+ is able to ensure exceptional performance, especially during high-traffic events like the Super Bowl.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Paramount+ engineers partnered closely with Google Cloud to establish guiding principles for this complex migration:&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Multi-regional journey: &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Paramount+ and Google Cloud teams collaborated for more than a year to ensure their infrastructure can scale into multiple regions. This journey happened without taking any downtime or downgrading end-user experience. The Paramount+ team is able to ensure that adding a new region should take only a matter of days, not years. Paramount+ had already adopted stateless principles to ensure optimal scale and usage of Google Cloud resources prior to becoming multi-regional. This strategic shift helped prepare Paramount+ to deliver a seamless experience while ensuring security and zero data loss.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Scalable architecture: &lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt;Paramount+ has adopted a distributed database running across multiple regions to ensure data consistency. Paramount+ and Google Cloud strive to maintain elasticity in the architecture to handle spiky traffic either during live events or for serving hit shows. This ensures the infrastructure can be both easily pre-scaled and autoscales. In addition to CI/CD principles, the Paramount+ team is also adopting a &lt;/span&gt;&lt;a href="https://cloud.google.com/architecture/application-deployment-and-testing-strategies#choosing_the_right_strategy"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;blue-green deployment approach&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to provide consistent experience to the end user and reduce risk.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Regional independence:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; Bringing services closer to users while mitigating any natural disaster that may interrupt services was critical. This active-active multi-regional enabled Paramount+ to support a high number of daily active users and unprecedented amount of traffic during large sporting events. There is a strict policy to ensure that no region is dependent on any other region. This goes all the way from the content delivery network (CDN) to the databases. Paramount+ team has ensured that adding or removing scale in a region does not impact the overall end-user experience.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Operational consistency:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The Paramount+ SRE team set the standard guidelines and process to keep the regions homogenous for simplified management and addressing the business needs in timeline fashion. Consistent processes around security, audit, and deployment were put in place so that end users don’t have to know anything about the regions. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;strong style="vertical-align: baseline;"&gt;Strict objectives:&lt;/strong&gt;&lt;span style="vertical-align: baseline;"&gt; The team had a goal to meet aggressive recovery-time-objective (RTO) and recovery-point-objective (RPO) targets. Having a strict service-level agreement and delivering on it was a critical aspect for supporting 71 million subscribers and having truly 24/7 streaming services. Strict SLAs ensured zero downtime, low latency, and robust monitoring and observability framework so the team could proactively address any issues that may impact end users.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
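To make the blue-green idea above concrete, here is a minimal sketch in Python (the class and names are hypothetical illustrations, not Paramount+'s actual tooling): a release goes to the idle environment, and traffic is flipped only after health checks pass, so rollback amounts to a single pointer swap.

```python
# Minimal blue-green deployment sketch (illustrative only).
class BlueGreenRouter:
    def __init__(self):
        self.envs = {"blue": None, "green": None}  # deployed version per environment
        self.live = "blue"                         # environment currently serving traffic

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy):
        """Deploy to the idle environment; flip traffic only if health checks pass."""
        target = self.idle
        self.envs[target] = version
        if healthy(version):
            self.live = target   # atomic cutover to the freshly verified environment
            return True
        return False             # live environment untouched: instant rollback

router = BlueGreenRouter()
router.envs["blue"] = "v1"
ok = router.deploy("v2", healthy=lambda v: True)  # cutover succeeds, "green" goes live
```

Because the live environment is never modified in place, a failed health check simply leaves traffic where it was.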
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Migrating to a multi-region setup meant rethinking deployment processes, automation tools, and the entire underlying database, all while upholding the established RTO and RPO. By working with Google Cloud, Paramount+ was able to transition from a multi-zonal architecture to an active-active multi-regional architecture and build on its world-class streaming service.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;
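The regional-independence principle can likewise be sketched in a few lines (the region names and latency figures below are hypothetical, not Paramount+'s real topology): route each user to the lowest-latency healthy region, so the loss of any single region never creates a cross-region dependency.

```python
# Pick the nearest healthy region; any region can drop out without
# affecting the others (illustrative sketch, not real infrastructure).
def route(user_latency_ms, healthy_regions):
    """Return the lowest-latency region that is currently healthy."""
    candidates = {r: ms for r, ms in user_latency_ms.items() if r in healthy_regions}
    if not candidates:
        raise RuntimeError("no healthy region available")
    return min(candidates, key=candidates.get)

latencies = {"us-east1": 20, "us-central1": 45, "europe-west1": 110}
primary = route(latencies, healthy_regions={"us-east1", "us-central1", "europe-west1"})
# If us-east1 drops out, traffic simply shifts to the next-nearest healthy region:
failover = route(latencies, healthy_regions={"us-central1", "europe-west1"})
```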
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_OmOvZzD.max-1000x1000.jpg"
        
          alt="1"&gt;
        
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;The future is bright&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The media landscape is dynamic, and Paramount+ has adapted with a technology platform that has scaled to its global audience. Achieving broadcast quality across platforms and devices is non-trivial, and the teams work hard to achieve this in close collaboration with the Google Cloud team. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;With this foundation, Paramount+ aims to continue optimizing and innovating with new technologies, like generative AI, all the while keeping viewers entertained without interruption and delivering a world-class customer experience. &lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 09 May 2024 13:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/paramount-global-built-its-streaming-platform-on-google-cloud/</guid><category>Customers</category><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Paramount+: A streaming powerhouse with limitless entertainment</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/paramount-global-built-its-streaming-platform-on-google-cloud/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Shiva Paranandi</name><title>SVP, Cloud Advancement &amp; SRE, Paramount</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Ashutosh Tripathi</name><title>Principal Architect, Google Cloud</title><department></department><company></company></author></item><item><title>Upgrading Immersive Stream for XR to Unreal Engine 5.3</title><link>https://cloud.google.com/blog/topics/telecommunications/immersive-stream-for-xr-now-supports-unreal-engine-5-3/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Google Cloud's &lt;/span&gt;&lt;a href="https://cloud.google.com/immersive-stream"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Immersive Stream for XR&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; is a powerful cloud-based solution for rendering and streaming high-quality XR experiences. 
And now it’s getting even better with its integration of Unreal Engine 5.3. This latest upgrade unlocks a wealth of new features that empower developers to push the boundaries of immersive experiences.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Unreal Engine 5.3 brings features that significantly improve development with Immersive Stream for XR, including: &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Improved &lt;/span&gt;&lt;a href="https://dev.epicgames.com/documentation/en-us/unreal-engine/lumen-global-illumination-and-reflections-in-unreal-engine" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Lumen&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and &lt;/span&gt;&lt;a href="https://dev.epicgames.com/documentation/en-us/unreal-engine/nanite-virtualized-geometry-in-unreal-engine?application_version=5.3" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Nanite&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; capabilities deliver better visual results for hyper-realistic lighting and stunningly detailed environments with enhanced performance.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;The new Material Layering System simplifies creating complex, layered materials. &lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;World Partition streamlines the management and streaming of massive open worlds.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li aria-level="1" style="list-style-type: disc; vertical-align: baseline;"&gt;
&lt;p role="presentation"&gt;&lt;span style="vertical-align: baseline;"&gt;Smoother frame rates, faster load times, and greater efficiency allow for more ambitious projects.&lt;/span&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_-_ISXR_UE_5.3_DEMOS.max-1000x1000.png"
        
          alt="1 - ISXR UE 5.3 DEMOS"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="r6le5"&gt;Immersive Stream for XR + Unreal 5.3 Demos&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;In addition to the Unreal 5.3 updates, we’ve also upgraded the &lt;/span&gt;&lt;a href="https://github.com/GoogleCloudPlatform/immersive-stream-for-xr-templates" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Immersive Stream for XR template project&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; with optimized blueprints, simplified logic, easier-to-use events for mode switching, and new demos. Best of all, you can now directly integrate template files into your existing Unreal projects, streamlining the creation process for developers and allowing them to craft Immersive Stream for XR experiences in both 3D and AR modes.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Upgrading to the latest version of Unreal Engine together with the simplified template project allows customers to align and streamline their workflows on the latest version of Unreal Engine with minimal integration effort.&lt;/span&gt;&lt;/p&gt;
&lt;p style="padding-left: 40px;"&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;"The upgrade to UE 5.3 has revolutionized Rotor Studios' production process, particularly through advanced Nanite improvements that extend to landscapes and foliage, enhancing detail and performance, all while being seamlessly integrated with Google's cutting-edge streaming technology. This update not only streamlines our asset workflow but also dramatically enriches the visual fidelity of our projects, setting a new standard for realism in our industry." &lt;/span&gt;&lt;span style="vertical-align: baseline;"&gt;- Peter Shand, Head of Realtime, Rotor Studios&lt;/span&gt;&lt;/p&gt;
&lt;p style="padding-left: 40px;"&gt;&lt;span style="font-style: italic; vertical-align: baseline;"&gt;"KDDI continues to innovate with Google Cloud in creating new customer experiences using Immersive Stream for XR and is excited about the updates and features in Unreal 5.3. Our goal is to solve for the Japan market but aim to scale our solutions to global markets." &lt;/span&gt;&lt;span style="vertical-align: baseline;"&gt;- Katsuhiro Kozuki, General Manager of Advanced Technology Strategy Department, KDDI Corporation&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;If you're ready to create cutting-edge XR experiences, Immersive Stream for XR with Unreal Engine 5.3 is the platform for you. Visit the &lt;/span&gt;&lt;a href="https://cloud.google.com/immersive-stream/xr?hl=en"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Google Cloud website&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; for more information and to start your own XR development journey today.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 18 Apr 2024 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/topics/telecommunications/immersive-stream-for-xr-now-supports-unreal-engine-5-3/</guid><category>Media &amp; Entertainment</category><category>Networking</category><category>Telecommunications</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Upgrading Immersive Stream for XR to Unreal Engine 5.3</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/topics/telecommunications/immersive-stream-for-xr-now-supports-unreal-engine-5-3/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Aria Shahingohar</name><title>Software Engineer, Immersive Stream for XR</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Cati Grasso</name><title>Interaction Designer, Immersive Stream for XR</title><department></department><company></company></author></item><item><title>Google Cloud partners fuel media and entertainment boom: Viewers reap the rewards</title><link>https://cloud.google.com/blog/products/media-entertainment/partners-fueling-powerful-viewer-experiences-with-at-next24/</link><description>&lt;div class="block-paragraph_advanced"&gt;&lt;p&gt;&lt;a href="https://quickplay.com/quickplay-brings-generative-ai-to-programmers-to-optimize-storefront-search-and-discovery-tools/" rel="noopener" 
target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Quickplay&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and Google Cloud joining forces to utilize next-generation AI for video streaming discovery is another noteworthy collaboration. The premium entertainment platform’s Curator Assistant, powered by Google Cloud AI, empowers curators to create dynamic storefronts with personalized recommendations. By analyzing various data points and historical performance, Curator Assistant provides intelligence to support more seamless content discovery, ultimately enhancing viewer experience.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Streamlining workflows and saving costs&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Partnerships between Google Cloud and innovative companies are driving operational efficiencies and cost savings across the board. For instance, TelevisaUnivision has been able to gain real-time visibility into its video network by integrating &lt;/span&gt;&lt;a href="https://alvalinks.com/televisaunivision-partners-with-alvalinks-for-optimizing-video-networking-workflows-with-google-cloud/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;AlvaLinks&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;' Cloudrider technology with Google Cloud. As a result, the Spanish-language media company has been able to identify and address issues quickly, streamline workflows, and boost overall efficiency. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.tapeark.com/partners/google-cloud-platform/" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Tape Ark&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; migrated one of the largest tape-based sports broadcast video collections in the world to Google Cloud. This move empowers the broadcaster to unlock the value of its previously idle video content by enabling monetization opportunities. It also eliminates the need for outdated tape infrastructure, reducing costs associated with hardware maintenance, software licenses, and data center space. This project creates a massive historical archive of favorite sports moments for fans to enjoy.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Lastly, Globo, a major media company in Brazil, moved its visual effects (VFX) workflows to Google Cloud to achieve greater agility and meet sustainability goals. This project was a part of its larger initiative to migrate the entire company to the cloud. By using &lt;/span&gt;&lt;a href="https://url.avanan.click/v2/___https://blog.beboptechnology.com/blog/how-globo-uses-bebop-with-google-cloud-and-hp-anywhere___.YXAzOm5lY3RhcjphOmc6MTNkMDAxZDQyODBhOTFhNTM1MDk2NDkwNGI1OTAyZTY6NjpiNjZhOjM1Nzk1YzQ3M2ZlNWNhOGNiNjBmOTBhYzZjZjcxYzBkMDI1OTFhNDM1ZGFjZmY4OGQyZTAzYTNjNmE2NTYwZjA6aDpU" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;BeBop&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and &lt;/span&gt;&lt;a href="https://url.avanan.click/v2/___https://h20195.www2.hp.com/v2/GetDocument.aspx?docname=4AA8-3571ENW___.YXAzOm5lY3RhcjphOmc6MTNkMDAxZDQyODBhOTFhNTM1MDk2NDkwNGI1OTAyZTY6Njo0YmE3OjhlMTJjNmNkOGRhYzBjM2MxYjFjNTU1NTlhNDIxNDFhYzY1YWFmN2FmMDJjN2MzOWVlM2U5MzU4YWFiMzIyMzc6aDpU" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;HP Anyware&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;, Globo's editors and VFX artists can now work remotely with just a light device and an internet connection. This not only reduces employee reliance on expensive hardware, but also decreases Globo’s carbon footprint. &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Scaling up for success: Streamers get speed and power&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;For today's streaming services, handling massive content libraries and delivering high-quality video requires a robust and scalable foundation. Adapting to emerging technologies and changing consumer needs is a monumental challenge, but it’s possible to rapidly transform with the right technology partnerships. Case in point, TelevisaUnivision launched its giant streaming platform, ViX, in just nine months leveraging Google Cloud's scalable infrastructure and &lt;/span&gt;&lt;a href="https://url.avanan.click/v2/___https://www.akta.tech/news/building-televisa-univision-vix-in-nine-months-powered-by-akta-video-platform/___.YXAzOm5lY3RhcjphOmc6MTNkMDAxZDQyODBhOTFhNTM1MDk2NDkwNGI1OTAyZTY6Njo4NmQxOjhhNmQxZTA4YjhiYjZiYjc2MGViNWE1ZDI5ZWVlMDMwM2I2MWM1MDYxZjUwNDE0NDNjN2RjNThhMzM4ZWEyMDk6aDpU" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Akta&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt;’s video workflow management solutions.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Stationhead, a popular music streaming app known for its social features, used real-time audio technology from the leading interactive video platform &lt;/span&gt;&lt;a href="https://url.avanan.click/v2/___https://www.streamingmedia.com/PressRelease/Jennifer-Lopez-Nicki-Minaj-and-Olivia-Rodrigo-Have-Launched-Fan-Channels-on-Stationhead-Powered-by-Phenix_55470.aspx___.YXAzOm5lY3RhcjphOmc6MTNkMDAxZDQyODBhOTFhNTM1MDk2NDkwNGI1OTAyZTY6NjpkN2QzOmVkZjExOWFmODNmZGVkMDRmNzEzODFiMmFhZjdlY2NiNzQxNDA5MGNlNGU4MmRlODI0NzY0NGE5MGQwMTE5Mjg6aDpU" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Phe&lt;/span&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;nix&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to power custom channels for celebrities like Jennifer Lopez and Olivia Rodrigo. These channels let fans connect, listen together, and even join interactive events with their favorite artists. The secret sauce? Phenix runs on Google Cloud infrastructure, allowing Stationhead to deliver these features with ultra-low latency — and gain a competitive edge. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Similarly, Globo partnered with &lt;/span&gt;&lt;a href="https://bitmovin.com/globo-google-cloud" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Bitmovin&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; and Google Cloud to achieve lightning-fast encoding times, reduce costs, and create exceptional video quality, even for demanding formats like 8K. This collaboration ensures a smooth viewing experience for their audience without breaking the bank.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Finally, Amagi'&lt;/span&gt;&lt;span style="vertical-align: baseline;"&gt;s experience reflects the overall growth trend. By combining its cloud-based SaaS technology for broadcast and streaming TV with Google Cloud infrastructure, Amagi fueled a 30% increase in broadcast and FAST channels on the platform. This partnership is empowering broadcasters with advanced tools for content management, diverse ad formats, and insightful analytics — all while benefiting from Google Cloud's scalability. &lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style="vertical-align: baseline;"&gt;Pushing the boundaries of creativity&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;The realm of media and entertainment isn't limited to improving the efficiency of content consumption, and Google Cloud is also fostering innovation at the intersection of art and technology. For example, we recently collaborated with digital VFX service provider &lt;/span&gt;&lt;a href="https://url.avanan.click/v2/___https://www.linkedin.com/feed/update/urn:li:activity:7183834289023709185/___.YXAzOm5lY3RhcjphOmc6MTNkMDAxZDQyODBhOTFhNTM1MDk2NDkwNGI1OTAyZTY6NjpjODE1OmM4NzdlZWZlNGJmZDA5NjM1YWRlZmIwOWQzM2Q4MmJlMjY4MjI4NGM2MWE3OTFiMTNmZTg5ZTBjYjYzYzRhYzE6aDpU" rel="noopener" target="_blank"&gt;&lt;span style="text-decoration: underline; vertical-align: baseline;"&gt;Gunpowder&lt;/span&gt;&lt;/a&gt;&lt;span style="vertical-align: baseline;"&gt; to support Turkish media artist Refik Anadol’s “Dataland” project. Google Cloud's high-performance infrastructure provided the foundation for complex data caching and high-fidelity renderings, crucial elements for Anadol's data-driven art. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;Partnerships like these highlight the exciting potential of technology to empower artists and push the boundaries of creative expression, paving the way for a future where art and technology seamlessly converge. &lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style="vertical-align: baseline;"&gt;These are just a few examples of how Google Cloud and its partners are transforming the media and entertainment landscape with AI. By working together, we are creating solutions that benefit both content creators and viewers. This collaborative approach is fueling innovation and growth in the industry today, ensuring exciting entertainment experiences for everyone tomorrow.&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Wed, 10 Apr 2024 12:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/partners-fueling-powerful-viewer-experiences-with-at-next24/</guid><category>AI &amp; Machine Learning</category><category>Google Cloud Next</category><category>Partners</category><category>Media &amp; Entertainment</category><media:content height="540" url="https://storage.googleapis.com/gweb-cloudblog-publish/images/Next24_Blog_blank_2-04.max-600x600.jpg" width="540"></media:content><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Google Cloud partners fuel media and entertainment boom: Viewers reap the rewards</title><description></description><image>https://storage.googleapis.com/gweb-cloudblog-publish/images/Next24_Blog_blank_2-04.max-600x600.jpg</image><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/partners-fueling-powerful-viewer-experiences-with-at-next24/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Kip Schauer</name><title>Global Head of Media &amp; Entertainment and Gaming Partnerships, Google Cloud</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Albert Lai</name><title>Global Director, Media &amp; Entertainment, Google Cloud</title><department></department><company></company></author></item><item><title>How Arvato Systems makes 3D picture production easier, faster and cheaper with Google 
Cloud&lt;/title&gt;&lt;link&gt;https://cloud.google.com/blog/topics/customers/arvato-systems-makes-picture-production-with-3d-assets/&lt;/link&gt;&lt;description&gt;&lt;div class="block-paragraph"&gt;&lt;p data-block-key="u6kus"&gt;With the rise of ecommerce and digital business models, the need to create high-quality, photorealistic 3D pictures continues to surge. Today, the way brands photograph and style their products can play a huge role in purchase decisions. Some 87% of consumers say that &lt;a href="https://www.cmocouncil.org/thought-leadership/reports/btob-content-impacts-customer-thinking--buying-decisions" target="_blank"&gt;product pictures are very important when making a purchase decision online&lt;/a&gt; while another 67% say &lt;a href="https://www.mdgsolutions.com/learn-about-multi-location-marketing/its-all-about-the-images-infographic/" target="_blank"&gt;the quality of product images is very important in their purchasing decision&lt;/a&gt;. In fact, according to recent research, &lt;a href="https://www.mdgsolutions.com/learn-about-multi-location-marketing/its-all-about-the-images-infographic/" target="_blank"&gt;more customers value the quality of a product’s image&lt;/a&gt; than they value product-specific information (63%) or long descriptions (54%).&lt;/p&gt;&lt;p data-block-key="5j0ij"&gt;Imagine viewing your new living room couch from a realistic 3D perspective, complete with rich details such as texture, lighting, and accessories that match your preferences. These 3D assets can be used not only for image production but also potentially for augmented reality (AR) and virtual reality (VR) use cases and scenarios.&lt;/p&gt;&lt;p data-block-key="fl814"&gt;Google Cloud Premier partner &lt;a href="https://www.arvato-systems.com/more/about-arvato-systems" target="_blank"&gt;Arvato Systems&lt;/a&gt; developed &lt;a href="https://www.arvato-systems.com/solutions-technologies/products/imagejet" target="_blank"&gt;imagejet&lt;/a&gt; to deliver 3D picture production. 
This innovative new cloud-based solution leverages G2 VMs powered by &lt;a href="https://www.nvidia.com/en-us/data-center/l4/" target="_blank"&gt;NVIDIA L4 Tensor Core GPUs&lt;/a&gt; and offers organizations an easy entry point to mass picture production with photorealistic studio quality.&lt;/p&gt;&lt;h3 data-block-key="3g317"&gt;&lt;b&gt;Challenging the status quo and surpassing traditional boundaries&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="eq3oe"&gt;Due to the online shopping boom in the post-COVID era, there has been an unprecedented demand for first-class product presentation experiences that use studio-quality images. To keep pace, retailers, consumer packaged goods companies, manufacturers, and other small and medium businesses are now extending their go-to-market strategies into online channels and need a way to present their products more professionally.&lt;/p&gt;&lt;p data-block-key="5ombr"&gt;Traditional product photography is quickly reaching its limits; it’s costly, inflexible, and lacks scalability. Taking high-resolution images of products requires a studio setup with expensive equipment and lighting to capture just a few scenes. It offers few options for capturing a broad range of product textures, colors, and materials. This approach does not scale with the rise of product variants and market or trend dynamics that require rapid product adaptations.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_-_Picture_1__Photo_Studio_setup_with_pro.max-1000x1000.png"
        
          alt="1 - Picture 1_ Photo Studio setup with product photography equipment"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="rapgk"&gt;Photo Studio setup with product photography equipment&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="u6kus"&gt;On the other hand, building a powerful 3D rendering platform from scratch requires significant investments in development efforts and infrastructure, such as acquiring powerful NVIDIA graphics processing units (GPUs) and costly third-party software licenses for 3D rendering software.&lt;/p&gt;&lt;p data-block-key="edft7"&gt;Building an image product pipeline takes tremendously high effort. For many organizations, it’s an undertaking that is out of reach, especially given the high costs and technical complexity. Furthermore, image production with in-house builds or traditional product photography-based approaches is much slower, delaying time to market and making it much more difficult for teams to react quickly to market changes.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2_-_Picture_2__Example_3D_rendering_output.max-1000x1000.png"
        
          alt="2 - Picture 2_ Example 3D rendering output using imagejet"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="rapgk"&gt;Example 3D rendering output using imagejet&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/3_-_Picture_3__Example_3D_rendering_output.max-1000x1000.jpg"
        
          alt="3 - Picture 3_ Example 3D rendering output using imagejet using rich details of texture and illumination"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="rapgk"&gt;Example 3D rendering output using imagejet showing rich details of texture and illumination&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;h3 data-block-key="u6kus"&gt;&lt;b&gt;How imagejet improves the 3D picture production process on Google Cloud&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="63i88"&gt;Arvato Systems developed imagejet to significantly accelerate the 3D picture production process. The goal was to lower the bar of access to 3D production pipelines for customers while also reducing costs and complexity.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/4_-_Illustration_4__imagejet_workflow.max-1000x1000.jpg"
        
          alt="imagejet workflow"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="rapgk"&gt;Imagejet workflow&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="u6kus"&gt;Imagejet provides the following key capabilities:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="9bror"&gt;Streamlined picture production based on 3D assets without requiring any change of existing photography workflows&lt;/li&gt;&lt;li data-block-key="d2pse"&gt;A collaboration platform to connect teams (customer, 3D agency)&lt;/li&gt;&lt;li data-block-key="aguc1"&gt;Integrated approval and workflow management&lt;/li&gt;&lt;li data-block-key="b7h8l"&gt;Fully scalable production and high throughput of picture generation (on Google Cloud), leveraging key standards like Pixar’s Universal Scene Description (USD) and industry standard Product Information Management Data (PIM)&lt;/li&gt;&lt;li data-block-key="18q4"&gt;Single source publishing — one asset type — across multiple distribution channels (e.g. AR/VR applications)&lt;/li&gt;&lt;/ul&gt;&lt;p data-block-key="7gcev"&gt;For example, imagine you have a customer that requests 4 million pictures for his product catalog. Using imagejet with dynamic workload distribution on Google Cloud NVIDIA L4 GPUs and optimized spot instances), you could deliver the requested assets in just two weeks. The same request would take almost two years to complete with an in-house or on-premises 3D production pipeline.&lt;/p&gt;&lt;h3 data-block-key="890nb"&gt;&lt;b&gt;Partnering with Google Cloud for more innovation&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="4r5g9"&gt;When Arvato Systems started building imagejet, the team knew it needed a cloud partner that could offer a broad range of state-of-the-art technology and strong technology partnerships with key market players. &lt;a href="https://cloud.google.com/nvidia"&gt;Google Cloud’s strategic partnership with NVIDIA&lt;/a&gt; ensures that Google Cloud customers have access to the latest GPUs and solid capacity. 
In addition, Google Cloud provided other key services to improve performance, such as &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine&lt;/a&gt;, &lt;a href="https://cloud.google.com/pubsub"&gt;Pub/Sub&lt;/a&gt;, &lt;a href="https://cloud.google.com/storage"&gt;Cloud Storage&lt;/a&gt;, &lt;a href="https://cloud.google.com/memorystore"&gt;Memorystore for Redis&lt;/a&gt; and &lt;a href="https://cloud.google.com/bigquery"&gt;BigQuery&lt;/a&gt;.&lt;/p&gt;&lt;p data-block-key="8lki8"&gt;Through strong collaboration with the Google Cloud Customer Engineering team, Arvato Systems was able to identify that NVIDIA’s L4 Tensor Core GPUs offer the best performance for its 3D rendering workloads, which require heavy 32-bit (floating point) operations. Using the latest NVIDIA L4 GPUs allowed Arvato Systems to improve imagejet’s 3D rendering performance by 160% and reduce rendering costs by 75% — a significant improvement over the previously tested &lt;a href="https://www.nvidia.com/en-us/data-center/a100/" target="_blank"&gt;NVIDIA A100 Tensor Core GPUs&lt;/a&gt;.&lt;/p&gt;&lt;p data-block-key="2j6jv"&gt;"In our work with Arvato Systems, we have not only pushed technological boundaries but also made a stride toward lowering energy consumption. By using NVIDIA L4 GPUs on Google Cloud, we have significantly improved the energy efficiency of our 3D rendering processes,” said Anna-Maria Martini, Google Cloud’s head of customer engineering for Media and Entertainment. “This is a testament to how advanced technology can not only boost performance but also play a crucial role in reducing the ecological footprint, an aspect increasingly important in the digital world."&lt;/p&gt;&lt;p data-block-key="3cdg7"&gt;Besides an excellent cost-performance ratio and consistent availability of required capacities in the EU region, Arvato Systems also benefits from access to the wider ecosystem of Google Cloud products and services. 
imagejet takes advantage of the high scalability of the underlying Google Cloud infrastructure to accommodate multiple 3D mass production jobs, reduce technical complexity, and lower labor costs by offering a flexible utilization-based pricing approach to its customers.&lt;/p&gt;&lt;h3 data-block-key="5d5i0"&gt;&lt;b&gt;Delivering high quality 3D picture production in an easier and faster way&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="bm8m"&gt;The development of imagejet marks a key milestone in how businesses can transform and enhance their product presentations, which are now a fundamental part of their business models. With the continuous advancement of cloud technologies and the increasing demand for immersive and more interactive shopping experiences, imagejet offers a new way of working that makes high-quality 3D picture production faster, more innovative, and more cost-effective.&lt;/p&gt;&lt;p data-block-key="4r5hk"&gt;“Thanks to Google Cloud, our solution imagejet will revolutionize the way artists, agencies, marketing departments, gamers, developers and more work,” said Christian Scholz, Arvato Systems’ VP of Cloud and Business Transformation. 
“We give our customers &lt;i&gt;compelling&lt;/i&gt; tools so they can bring their power together and create virtual worlds.”&lt;/p&gt;&lt;p data-block-key="fp1d4"&gt;&lt;b&gt;For more information, we also recommend the following resources:&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="87en8"&gt;&lt;a href="https://cloud.google.com/blog/products/compute/introducing-g2-vms-with-nvidia-l4-gpus"&gt;Introducing G2 VMs with NVIDIA L4 GPUs&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="40hu8"&gt;&lt;a href="https://openusd.org/release/index.html" target="_blank"&gt;Universal Scene Description (OpenUSD)&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr/&gt;&lt;p data-block-key="7867b"&gt;&lt;i&gt;&lt;sup&gt;Thanks to Anna-Maria Martini and Christian Scholz for contributing their insights to this post.&lt;/sup&gt;&lt;/i&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Mon, 05 Feb 2024 08:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/topics/customers/arvato-systems-makes-picture-production-with-3d-assets/</guid><category>Compute</category><category>Media &amp; Entertainment</category><category>Partners</category><category>Customers</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>How Arvato Systems makes 3D picture production easier, faster and cheaper with Google Cloud</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/topics/customers/arvato-systems-makes-picture-production-with-3d-assets/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Götz Moritz Bongartz</name><title>Product Owner of imagejet, Arvato Systems</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Benjamin Storz</name><title>Principal Architect, Google Cloud</title><department></department><company></company></author></item><item><title>Migrating from Cassandra to Bigtable at Latin America’s largest streaming 
service</title><link>https://cloud.google.com/blog/products/databases/globos-journey-from-cassandra-to-bigtable/</link><description>&lt;div class="block-paragraph"&gt;&lt;p data-block-key="mclhb"&gt;&lt;b&gt;&lt;i&gt;Editor’s note&lt;/i&gt;&lt;/b&gt;&lt;i&gt;: Today we hear from Grupo Globo, the largest media group in Latin America, which operates the Globoplay streaming service. This post outlines their migration from Apache Cassandra to Bigtable and learnings along the way.&lt;/i&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p data-block-key="c3sqn"&gt;Grupo Globo, Latin America’s largest media group, owns and operates &lt;a href="https://globoplay.globo.com/" target="_blank"&gt;Globoplay&lt;/a&gt; — a streaming service where users can access live TV broadcasts in addition to on-demand video and audio content. Since most of our users don’t consume content in one sitting, and switch between multiple devices, the ability for them to resume watching a title where they left off is a key capability for our service.&lt;/p&gt;&lt;p data-block-key="77fko"&gt;Our Continue Watching API (CWAPI) is an application written in Go that processes audio and video watched timestamps, a workload which consists of 85% write and 15% read requests. To handle such write traffic in a performant way, we historically relied on Apache Cassandra, which is known for its high write-throughput and low write latencies.&lt;/p&gt;&lt;p data-block-key="4aovb"&gt;Initially, Cassandra was installed on physical machines in Globo's proprietary data center. Given the straightforward compatibility with their existing on-premises setup, a lift-and-shift approach to &lt;a href="https://cloud.google.com/compute"&gt;Compute Engine&lt;/a&gt; made the most sense at the time. Although the application functioned well in this setup, accommodating variations in user traffic required adding and removing nodes, a time-consuming practice, which ultimately resulted in over-provisioned clusters and drove up our infrastructure costs. 
Working with Cassandra also meant patching the software regularly, an often overlooked but significant operational overhead.&lt;/p&gt;&lt;p data-block-key="9u2cg"&gt;We began the process of evaluating possible alternatives to replace Cassandra. We had read about how &lt;a href="https://cloud.google.com/blog/products/databases/youtube-runs-on-bigtable"&gt;Bigtable was being used at YouTube&lt;/a&gt;, and we were encouraged to see that &lt;a href="https://www.youtube.com/watch?v=Hfd3VZOYXNU" target="_blank"&gt;other streaming services&lt;/a&gt;, like Spotify, had made the switch from Cassandra to &lt;a href="https://cloud.google.com/bigtable"&gt;Bigtable&lt;/a&gt;, realizing savings of up to 75%. That said, we wanted to conduct our own evaluation using our own specific workloads and serving traffic.&lt;/p&gt;&lt;p data-block-key="ed044"&gt;We initially looked at "Cassandra as a Service" solutions on Google Cloud, which offered the convenience of lift-and-shift with no code changes. However, after benchmarking with our own data and running our load tests, we found that Bigtable was the best option for us. It had a lower cost of ownership and additional capabilities, even though it required more migration work than managed Cassandra in the cloud.&lt;/p&gt;&lt;h2 data-block-key="d1bkp"&gt;&lt;b&gt;Why we chose Bigtable&lt;/b&gt;&lt;/h2&gt;&lt;p data-block-key="1j8re"&gt;Bigtable proved to be a strong alternative, mainly due to its characteristics of low latency at high read/write throughput, scalability to large volumes of data, resilience, built-in Google Cloud integrations, and having been validated by Google products with billions of users. Being a managed service, it also simplifies operation compared to self-managed databases. 
Capabilities such as high availability, data durability and security are guaranteed out-of-the-box.&lt;/p&gt;&lt;p data-block-key="cg3u4"&gt;Furthermore, multi-primary replication between globally distributed regions increases availability and guarantees faster access by automatically routing requests to the nearest region, which also helped us deliver read-your-writes consistency for our use case with ease. Bigtable provides native tools such as usage metrics dashboards and &lt;a href="https://cloud.google.com/bigtable/docs/keyvis-overview"&gt;Key Visualizer&lt;/a&gt;, which helps find points for performance improvements by analyzing the pattern of access to keys. Data in Bigtable can also be queried from &lt;a href="https://cloud.google.com/bigquery"&gt;BigQuery&lt;/a&gt; without having to copy the data to the data warehouse.&lt;/p&gt;&lt;h2 data-block-key="7cdbb"&gt;Implementation, data migration and rollout&lt;/h2&gt;&lt;p data-block-key="ah9rg"&gt;After deciding that Bigtable would be the best alternative to replace Cassandra, the team planned the migration in the following steps.&lt;/p&gt;&lt;h3 data-block-key="39pnj"&gt;&lt;b&gt;Porting Cassandra code to Bigtable&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="109r1"&gt;Bigtable provides a wide variety of client libraries, including &lt;a href="https://pkg.go.dev/cloud.google.com/go/bigtable"&gt;Go&lt;/a&gt;. We focused on the code paths that write to the database first, which would allow us to migrate data between databases. Once writing was finished, we implemented the reading features and converted all Cassandra code to Bigtable without any issues. Tests were created to verify that each feature integrated properly with the rest of the system.&lt;/p&gt;&lt;h3 data-block-key="f06dd"&gt;&lt;b&gt;Enabling duplicate writing in both databases&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="d3nfc"&gt;To ensure that no new data would be lost during the migration, we enabled duplicate writing on both databases. 
We began by writing 1% of the data to each database, then gradually increased the percentage as we confirmed that there were no issues. This allowed us to validate how the database and application behaved without impacting the user, since Cassandra remained the primary database throughout the transition.&lt;/p&gt;&lt;h3 data-block-key="194im"&gt;&lt;b&gt;Data migration&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="43ilf"&gt;We decided to create a pipeline using &lt;a href="https://cloud.google.com/dataflow"&gt;Dataflow&lt;/a&gt; to perform batch migration. Using Bigtable’s &lt;a href="https://cloud.google.com/dataflow/docs/guides/templates/provided/cassandra-to-bigtable"&gt;Dataflow template&lt;/a&gt; as our starting point, we found the approach easy to implement and very performant.&lt;/p&gt;&lt;p data-block-key="dfd61"&gt;The script read a static file from the Cassandra dump, which was stored in a bucket on &lt;a href="https://cloud.google.com/storage"&gt;Cloud Storage&lt;/a&gt;. Each line of the file represented a line from the Cassandra table. The script then transformed the data for Bigtable and inserted it into the table. At the same time, CWAPI was writing new traffic to Bigtable.&lt;/p&gt;&lt;p data-block-key="7pv4p"&gt;After validating the script in the development environment, we prepared it for execution in production. Due to the large volume of data, we split the dump file into multiple files, each approximately 190 GB in size. This strategy reduced the likelihood of having to reprocess data in the event of an unexpected error during the execution of the Dataflow script.&lt;/p&gt;&lt;h3 data-block-key="fnl9a"&gt;&lt;b&gt;Validating the migration&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="h6jp"&gt;To validate the migration, we created a simple API that was deployed internally. 
This API exposed two ports, each with an endpoint that was equivalent in parameters and response, but fetched data from its respective database: Cassandra or Bigtable.&lt;/p&gt;&lt;/div&gt;
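The gradual dual-write rollout described above can be sketched as a deterministic percentage gate: hash the row key so that a given user is consistently included once the rollout percentage reaches their bucket. This is an illustrative sketch only, not Globo's actual Go code; the helper names, the "user#media" row-key shape, and the in-memory stores are ours, and production writes would go through the Cassandra driver and the Bigtable client library instead.

```python
import hashlib

def rollout_bucket(row_key: str) -> int:
    """Map a row key to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(row_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100

def should_dual_write(row_key: str, rollout_pct: int) -> bool:
    """Whether this write should also go to Bigtable at the current rollout percentage."""
    return rollout_bucket(row_key) < rollout_pct

def save_progress(row_key: str, position_s: int, rollout_pct: int,
                  cassandra: dict, bigtable: dict) -> None:
    # Cassandra stays the primary store throughout the transition.
    cassandra[row_key] = position_s
    # Bigtable receives a deterministic, gradually growing share of writes.
    if should_dual_write(row_key, rollout_pct):
        bigtable[row_key] = position_s

# Hypothetical "user#media" row key; at 100% every write lands in both stores.
cassandra_store, bigtable_store = {}, {}
save_progress("user42#title9", 1375, 100, cassandra_store, bigtable_store)
```

Because the gate hashes the row key, a key included at 1% stays included at every higher percentage, so Bigtable accumulates a consistent history for those users while Cassandra remains the primary store.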
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_x9uXRcd.max-1000x1000.jpg"
        
          alt="1"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="982f8"&gt;High-level architecture of the migration validation API&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="mclhb"&gt;It is essential to emphasize that this API leveraged the code used in the integration of the two databases, eliminating the need for a new implementation that could differ from the production implementation.&lt;/p&gt;&lt;p data-block-key="1tksk"&gt;This API was later used as a data source for &lt;a href="https://blog.twitter.com/engineering/en_us/a/2015/diffy-testing-services-without-writing-tests" target="_blank"&gt;Diffy&lt;/a&gt;, a tool that was initially built by Twitter and is now open source. We relied on Diffy to validate the migration. It finds potential bugs in your service by using instances of your new code and your old code side-by-side. Diffy behaves like a proxy and multicasts all the requests it receives to each of the running instances. It compares the responses and reports of any regressions that may arise from these comparisons.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2_4P0G3ks.max-1000x1000.jpg"
        
          alt="2"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="idqkp"&gt;Diffy topology&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="mclhb"&gt;We configured an instance of Diffy so that the two "primary" and "candidate" applications would be configured with the same validation application described above. The only difference is that the primary application uses the port for Cassandra, while the candidate application uses the port for Bigtable. This way, we can compare whether the migration was successful by comparing the same query in both databases.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/3_vWmvQ7q.max-1000x1000.png"
        
          alt="3"&gt;
        
      
        &lt;figcaption class="article-image__caption "&gt;&lt;p data-block-key="8aqa2"&gt;Diffy with validation API&lt;/p&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="mclhb"&gt;With Cassandra data migrated to Bigtable and CWAPI saving 100% of the data in both databases, we needed to ensure the migration was done successfully to begin the Bigtable integration rollout.&lt;/p&gt;&lt;p data-block-key="3055a"&gt;The strategy was to duplicate a small percentage of GET requests to Diffy, which would consequently call the created validation application and generate a report with the differences. In order not to burden the Cassandra base that was serving requests in production, we decided to use a sampling of 1% of requests.&lt;/p&gt;&lt;p data-block-key="7noh4"&gt;In the CWAPI architecture there is an instance of Nginx that acts as a reverse proxy, and we take advantage of this instance to duplicate GET requests. Two modules were essential for this to be achieved: split_clients and mirror. The split_clients module was used to control the percentage of requests that would be duplicated and the mirror used to duplicate the request.&lt;/p&gt;&lt;h3 data-block-key="3v4gn"&gt;&lt;b&gt;Rollout&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="51gce"&gt;After validating that the migration was correct based on sampling and correcting some inconsistencies, the team was very confident in starting the rollout, as we were already using Bigtable extensively with 100% of the writing without any problems and with consistent data.&lt;/p&gt;&lt;p data-block-key="73ags"&gt;We then used the same deployment strategy for the serving path, starting with 1% and gradually increasing until we reached 100% of GET requests being served by Bigtable.&lt;/p&gt;&lt;p data-block-key="d76m3"&gt;After a short "quarantine" period of approximately 15 days, we decommissioned our Cassandra servers.&lt;/p&gt;&lt;h2 data-block-key="ep3oo"&gt;Conclusion&lt;/h2&gt;&lt;p data-block-key="62pk2"&gt;The strategy used for this migration was of great importance in order to ensure there was no impact on the user. 
Writing into both databases allowed us to continue the integration and easily correct any problems. Diffy, together with the migration validation API, was essential to ensure that the migration was executed successfully and that user data was intact. The rollout of read requests after validation went very smoothly, as Bigtable was already handling high write traffic and the data had been validated. The well-abstracted CWAPI code, which is covered with unit, integration, and end-to-end tests, simplified implementation and gave us the necessary confidence that everything was correct.&lt;/p&gt;&lt;p data-block-key="6mhlf"&gt;Bigtable has proven to be a fantastic alternative to Cassandra, bringing several notable advantages. High availability and performance, combined with autoscaling, allow us to operate reliably and at a lower cost compared to the Cassandra cluster. We have already achieved savings of approximately 60% without sacrificing performance or features. Additionally, we migrated our Redis Stream-based queue solution to Pub/Sub. This transition not only enhanced the background processing capability of the application, boosting scalability and performance, but also contributed to more efficient maintenance and a notable reduction in operational costs.&lt;/p&gt;&lt;p data-block-key="1s4cm"&gt;Google Cloud has been a strategic choice in helping us deliver robust technological solutions and optimizing our financial resources. By migrating to Bigtable, we have decreased maintenance needs, guaranteed database scalability, acquired better observability tools and leveraged strong integrations with other Google Cloud products. 
We are excited to continue this partnership in simplifying database management in order to meet our business’ ever-evolving demands.&lt;/p&gt;&lt;h2 data-block-key="3rsbe"&gt;Learn more&lt;/h2&gt;&lt;p data-block-key="6hea6"&gt;Read more on how others are reducing cloud spend while improving service performance, scalability and reliability by moving to Bigtable:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="5tag6"&gt;&lt;a href="https://cloud.google.com/blog/products/databases/bigtable-helps-wunderkind-scale-retail-and-media-customers"&gt;Wunderkind migrates from DynamoDB to Bigtable for improved performance stability&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="cj52n"&gt;&lt;a href="https://cloud.google.com/blog/products/databases/airship-chooses-bigtable-to-empower-mobile-app-developers"&gt;Airship moves from Cassandra and HBase to Bigtable to reduce management overhead and improve performance reliability&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="1iri9"&gt;&lt;a href="https://youtu.be/Pq1SSYnzBKQ?t=218" target="_blank"&gt;Reltio modernizes Cassandra workloads with Bigtable&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="8g986"&gt;&lt;a href="https://cloud.google.com/blog/products/databases/how-box-migrated-from-hbase-to-cloud-bigtable"&gt;How Box migrated from HBase to Bigtable&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="8bcee"&gt;&lt;a href="https://cloud.google.com/customers/choreograph"&gt;Choreograph scales advertising solutions by moving from Couchbase to Bigtable&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p data-block-key="2n92i"&gt;Get started with a &lt;a href="https://console.cloud.google.com/freetrial?redirectPath=bigtable"&gt;Bigtable free trial&lt;/a&gt; today.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-related_article_tout"&gt;





&lt;div class="uni-related-article-tout h-c-page"&gt;
  &lt;section class="h-c-grid"&gt;
    &lt;a href="https://cloud.google.com/blog/products/databases/youtube-runs-on-bigtable/"
       data-analytics='{
                       "event": "page interaction",
                       "category": "article lead",
                       "action": "related article - inline",
                       "label": "article: {slug}"
                     }'
       class="uni-related-article-tout__wrapper h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6
        h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3 uni-click-tracker"&gt;
      &lt;div class="uni-related-article-tout__inner-wrapper"&gt;
        &lt;p class="uni-related-article-tout__eyebrow h-c-eyebrow"&gt;Related Article&lt;/p&gt;

        &lt;div class="uni-related-article-tout__content-wrapper"&gt;
          &lt;div class="uni-related-article-tout__image-wrapper"&gt;
            &lt;div class="uni-related-article-tout__image" style="background-image: url('')"&gt;&lt;/div&gt;
          &lt;/div&gt;
          &lt;div class="uni-related-article-tout__content"&gt;
            &lt;h4 class="uni-related-article-tout__header h-has-bottom-margin"&gt;How YouTube uses Bigtable to power one of the world’s largest streaming services&lt;/h4&gt;
            &lt;p class="uni-related-article-tout__body"&gt;YouTube uses Bigtable to store user activity for personalization, record metrics, and power reporting dashboards and analytics to name a ...&lt;/p&gt;
            &lt;div class="cta module-cta h-c-copy  uni-related-article-tout__cta muted"&gt;
              &lt;span class="nowrap"&gt;Read Article
                &lt;svg class="icon h-c-icon" role="presentation"&gt;
                  &lt;use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#mi-arrow-forward"&gt;&lt;/use&gt;
                &lt;/svg&gt;
              &lt;/span&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;/section&gt;
&lt;/div&gt;

&lt;/div&gt;</description><pubDate>Tue, 19 Dec 2023 17:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/databases/globos-journey-from-cassandra-to-bigtable/</guid><category>Media &amp; Entertainment</category><category>Databases</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Migrating from Cassandra to Bigtable at Latin America’s largest streaming service</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/databases/globos-journey-from-cassandra-to-bigtable/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Michel Henrique Aquino Santos</name><title>Software Engineer, Globo</title><department></department><company></company></author></item><item><title>Live streaming with Media CDN and Google Cloud Load Balancer</title><link>https://cloud.google.com/blog/products/networking/live-streaming-with-media-cdn-and-google-cloud-load-balancer/</link><description>&lt;div class="block-paragraph"&gt;&lt;p data-block-key="cfix0"&gt;Live-streaming applications require that streaming be uninterrupted and have minimal delays in order to deliver a quality experience to end users’ devices. Delays in rendering live streams can lead to poor video quality and video buffering, negatively impacting viewership. To ensure a high live-streaming quality, you need a reliable content delivery network (CDN) infrastructure.&lt;/p&gt;&lt;p data-block-key="9ktm3"&gt;&lt;a href="https://cloud.google.com/media-cdn/docs/overview"&gt;Media CDN&lt;/a&gt; is a content delivery network (CDN) platform designed for delivering streaming media with low latency across the globe. Notably, &lt;a href="https://cloud.google.com/media-cdn/docs/overview"&gt;Media CDN&lt;/a&gt; uses YouTube&amp;#x27;s infrastructure to bring video streams (both on-demand video and live) and large file downloads closer to users for fast and reliable delivery. 
As a CDN, it delivers content to users based on their location from a geographically distributed network of servers, helping to improve performance and reduce latency for users who are located far from the origin server.&lt;/p&gt;&lt;p data-block-key="4afh5"&gt;In this blog, we look at how live-streaming providers can utilize Media CDN infrastructure to better serve video content, whether the live-streaming application is running within Google Cloud, on-premises, or in a different cloud provider altogether. Media CDN, when integrated with &lt;a href="https://cloud.google.com/load-balancing"&gt;Google Cloud External Application Load Balancer&lt;/a&gt; as origin, can be utilized to render streams irrespective of the location of the live-streaming infrastructure. Further, it’s possible to configure Media CDN so that live streams can withstand most kinds of interruptions or outages, to ensure a quality viewing experience. Read on to learn more.&lt;/p&gt;&lt;h3 data-block-key="bb3i8"&gt;&lt;b&gt;Live streaming&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="6l1t5"&gt;Live streaming is the process of streaming video or audio content in real time to broadcasters and playback devices. 
Typically, live-streaming applications involve the following components:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="f824o"&gt;Encoder: Compresses the video to multiple resolutions/bitrates and sends the stream to a packager.&lt;/li&gt;&lt;li data-block-key="1g66d"&gt;Packager and origination service: Packages the transcoded content into different media formats and stores video segments to be served via HTTP endpoints.&lt;/li&gt;&lt;li data-block-key="aueaa"&gt;CDN: Streams the video segments from the origination service to playback devices across the globe with minimal latency.&lt;/li&gt;&lt;/ul&gt;&lt;h3 data-block-key="6v7jf"&gt;&lt;b&gt;Media CDN&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="40csd"&gt;At a high level, &lt;a href="https://cloud.google.com/media-cdn/docs/configuration"&gt;Media CDN&lt;/a&gt; contains two important components:&lt;/p&gt;&lt;p data-block-key="6uip2"&gt;&lt;b&gt;Edge cache service&lt;/b&gt;: Provides a public endpoint and enables route configurations to route traffic to a specific origin.&lt;/p&gt;&lt;p data-block-key="3q6sa"&gt;&lt;b&gt;Edge cache origin&lt;/b&gt;: Lets you configure a Cloud Storage bucket or a Google external Application Load Balancer as an origin.&lt;/p&gt;&lt;/div&gt;
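To illustrate how the two components above fit together, an edge cache service routing configuration might look roughly like the following sketch. The hostnames, resource names, and TTL here are ours, not from a real deployment, and the exact field names should be checked against the current Media CDN reference before use.

```yaml
# Hypothetical EdgeCacheService config: routes /live/* requests for
# stream.example.com to an edge cache origin backed by an external
# Application Load Balancer, with a short TTL suited to live segments.
name: live-stream-service
routing:
  hostRules:
    - hosts:
        - stream.example.com        # assumed hostname
      pathMatcher: live-routes
  pathMatchers:
    - name: live-routes
      routeRules:
        - priority: 1
          matchRules:
            - prefixMatch: /live/
          origin: alb-origin        # edge cache origin pointing at the load balancer
          routeAction:
            cdnPolicy:
              cacheMode: FORCE_CACHE_ALL
              defaultTtl: 2s        # keep live manifests and segments fresh
```

The short TTL reflects a common trade-off for live streaming: manifests must be refetched frequently so viewers see newly published segments, while the segments themselves can be cached at the edge.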
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/mediacdn-1.max-1000x1000.jpg"
        
          alt="mediacdn-1"&gt;
        
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="cfix0"&gt;The above figure depicts an architecture where Media CDN can serve a live stream origination service running in Google Cloud, on-prem or on an external cloud infrastructure, by integrating with &lt;a href="https://cloud.google.com/load-balancing/docs/https/setting-up-https"&gt;Application Load Balancer&lt;/a&gt;. Application Load Balancer enables connectivity to multiple backend services and provides advanced path- and host-based routing to connect to these backend services. This allows live stream providers to use Media CDN to cache their streams closer to the end users viewing the live channels.&lt;/p&gt;&lt;p data-block-key="ckh5h"&gt;The different types of backend services provided by Load Balancers to facilitate connectivity across infrastructure are:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="45ug8"&gt;&lt;b&gt;Internet/Hybrid NEG Backends&lt;/b&gt;: Connect to live-streaming origination service running in a different cloud provider or on-premises.&lt;/li&gt;&lt;li data-block-key="dt3dk"&gt;&lt;b&gt;Managed Instance Groups&lt;/b&gt;: Connect to live-streaming origination service running in &lt;a href="https://cloud.google.com/compute"&gt;Compute Engine&lt;/a&gt; across multiple regions.&lt;/li&gt;&lt;li data-block-key="f2t6m"&gt;&lt;b&gt;Zonal Network Endpoint Groups&lt;/b&gt;: Connect to live-streaming origination service running in GKE.&lt;/li&gt;&lt;/ul&gt;&lt;h3 data-block-key="afgdf"&gt;&lt;b&gt;Disaster recovery (Primary/Failover Origins)&lt;/b&gt;&lt;/h3&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/mediacdn-2.max-1000x1000.jpg"
        
          alt="mediacdn-2"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="cfix0"&gt;Since any disruption to live stream traffic can affect viewership, it is essential to plan for disaster recovery to protect against zonal or regional failures. Media CDN provides primary/secondary &lt;a href="https://cloud.google.com/media-cdn/docs/origins#failure-behavior"&gt;origin failover&lt;/a&gt; to facilitate disaster recovery.&lt;/p&gt;&lt;p data-block-key="lnnu"&gt;The above figure depicts Media CDN with an Application Load Balancer origin providing failover across regions. This is achieved by creating two “EdgeCacheOrigin” hosts pointing to the same Application Load Balancer with different “host header” values. Every EdgeCacheOrigin is configured to set the host header to a specific value. The Application Load Balancer performs host-based routing to route the live stream traffic requests based on the host header value.&lt;/p&gt;&lt;p data-block-key="6hpec"&gt;When a playback device requests the stream from Media CDN, it invokes the Application Load Balancer by setting the host header to the primary origin value. The load balancer looks at the host header and forwards the traffic to the primary live stream origination service. When the primary live stream provider fails, the failover origin rewrites the host header to the failover origin value and sends the request to Application Load Balancer. The load balancer matches the host and routes the request to a secondary live stream origination service in a different zone or region.&lt;/p&gt;&lt;p data-block-key="5f9rs"&gt;The below snippet depicts the URL host-rewrite configuration in the EdgeCacheOrigin:&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-code"&gt;&lt;dl&gt;
    &lt;dt&gt;code_block&lt;/dt&gt;
    &lt;dd&gt;&amp;lt;ListValue: [StructValue([(&amp;#x27;code&amp;#x27;, &amp;#x27;name: FAILOVER_ORIGIN\r\noriginAddress: &amp;quot;FAILOVER_ORIGIN_HOST&amp;quot;\r\noriginOverrideAction:\r\n  urlRewrite:\r\n    hostRewrite: &amp;quot;FAILOVER_ORIGIN_HOST&amp;quot;&amp;#x27;), (&amp;#x27;language&amp;#x27;, &amp;#x27;&amp;#x27;), (&amp;#x27;caption&amp;#x27;, &amp;lt;wagtail.rich_text.RichText object at 0x7f66bd2c4790&amp;gt;)])]&amp;gt;&lt;/dd&gt;
&lt;/dl&gt;&lt;/div&gt;
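&lt;div class="block-paragraph"&gt;&lt;p data-block-key="x9f2a"&gt;For context, the primary EdgeCacheOrigin points at the same load balancer and references the failover origin through its &lt;code&gt;failoverOrigin&lt;/code&gt; field. The sketch below is illustrative rather than a complete configuration; the resource names are placeholders, and the retry conditions shown are examples of the failure classes that can trigger failover:&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-code"&gt;&lt;pre&gt;name: PRIMARY_ORIGIN
originAddress: "PRIMARY_ORIGIN_HOST"
originOverrideAction:
  urlRewrite:
    hostRewrite: "PRIMARY_ORIGIN_HOST"
maxAttempts: 2
failoverOrigin: projects/PROJECT_ID/locations/global/edgeCacheOrigins/FAILOVER_ORIGIN
retryConditions:
  - CONNECT_FAILURE
  - HTTP_5XX&lt;/pre&gt;&lt;/div&gt;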
&lt;div class="block-paragraph"&gt;&lt;h3 data-block-key="cfix0"&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h3&gt;&lt;p data-block-key="ahc62"&gt;Media CDN is an important part of the live streaming ecosystem, helping to improve performance, reduce latency, and ensure quality for live streams. In this post, we looked at how live stream applications can utilize Google Media CDN across multiple environments and infrastructure. To learn more about Media CDN, please see:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="a56hn"&gt;&lt;a href="https://cloud.google.com/blog/products/networking/deploy-streaming-service-with-media-cdn"&gt;Deploy Streaming Service with Media CDN&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="cfc7d"&gt;&lt;a href="https://www.youtube.com/watch?v=GF90l7uk1qE" target="_blank"&gt;Media CDN Overview&lt;/a&gt;&lt;/li&gt;&lt;li data-block-key="4o5oe"&gt;&lt;a href="https://cloud.google.com/media-cdn/docs/origins#load-balancer-origins"&gt;Media CDN with Application Load Balancer&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description><pubDate>Fri, 15 Dec 2023 17:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/networking/live-streaming-with-media-cdn-and-google-cloud-load-balancer/</guid><category>Media &amp; Entertainment</category><category>Developers &amp; Practitioners</category><category>Hybrid &amp; Multicloud</category><category>Networking</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Live streaming with Media CDN and Google Cloud Load Balancer</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/networking/live-streaming-with-media-cdn-and-google-cloud-load-balancer/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Kishore Jagannath</name><title>Cloud Infrastructure Engineer</title><department></department><company></company></author></item><item><title>The Content Conundrum: How poor personalization and search experiences impact streaming 
platforms and their audiences</title><link>https://cloud.google.com/blog/products/media-entertainment/google-cloud-survey-finds-personalization-key-success-for-streaming-platforms/</link><description>&lt;div class="block-paragraph"&gt;&lt;p data-block-key="vjdj5"&gt;In an unexpected plot twist, streaming platforms are currently facing their own cliffhanger: an unprecedented period of slowing subscriber growth. According to &lt;a href="https://variety.com/vip/q1-2023-streaming-earnings-season-crunch-time-1235619446/" target="_blank"&gt;Macquarie Research&lt;/a&gt;, net subscription adds dropped more than 80% YoY in Q1 2022 across Netflix, Disney+, Hulu, ESPN+, HBO Max/Discovery+, Paramount+, Peacock, and AMC+. With endless content now available across a multitude of platforms, creating unique viewer experiences worth paying for has become mission critical for all streaming platforms. To examine the true cost of poor personalization and search experiences, Google Cloud commissioned a Harris Poll to survey more than 2,200 consumers from countries across the world.&lt;/p&gt;&lt;p data-block-key="9349j"&gt;&lt;b&gt;Time Theft: Poor Technology Costs Viewers Precious Time&lt;/b&gt;&lt;/p&gt;&lt;p data-block-key="blvtl"&gt;Viewers can lose up to &lt;i&gt;three hours a week&lt;/i&gt; searching for something to watch, spending an average of 24 minutes per session searching for content, according to the survey. With access to more choice, flexibility, and power over their consumption habits than ever before, audience patience is running razor thin. Google Cloud’s findings suggest almost half of respondents (48%) have canceled a service because they couldn’t find something to watch, costing streaming platforms crucial subscription revenue.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_The_Content_Conundrum.max-1000x1000.png"
        
          alt="1 The Content Conundrum"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="vjdj5"&gt;With a slim margin for error, consumers agree that easily finding something to watch is just as important as affordability when it comes to renewing their subscriptions. Last year we saw a record high with 599 original series produced, according to John Landgraff the chairman of FX Content and FX Productions, and with this amount of choice comes the challenge of finding the right content to provide to the right audience. Unsurprisingly viewers are much more likely to jump from platform to platform, hoping to find a show or movie that interests them. But when they do find that content, they’re more likely to hit the subscribe button. According to our findings, 79% have kept a subscription after discovering new content and 85% of upgrades to pay for a subscription were influenced by the ability to find content. For streaming platforms, it pays to help consumers find the content that best suits their interests. And as profitability remains distant for most major streaming providers, subscriber retention, experience, and monetization has become essential.&lt;/p&gt;&lt;p data-block-key="as8gc"&gt;Streaming companies face a challenge when it comes to understanding their users – both their past preferences and future intentions. Let’s take a look at the user journey:&lt;/p&gt;&lt;ul&gt;&lt;li data-block-key="38qrb"&gt;41% of viewers don’t have a specific program in mind when they turn on their TV. Instead, they expect to browse and discover something to watch.&lt;/li&gt;&lt;li data-block-key="5qeaj"&gt;At the same time, a whopping 81% of viewers expect streaming services to provide highly personalized experiences.&lt;/li&gt;&lt;/ul&gt;&lt;p data-block-key="do3kp"&gt;Content discovery plays a critical role in keeping subscribers satisfied, particularly searching for specific titles or content starring certain actors. 54% of consumers directly search for a title before browsing through suggested content. 
Unfortunately, almost half of viewers find search does not give them their expected results. This poor user experience results in 31% exiting the application, 37% switching to another streaming service, or 31% switching to another activity entirely.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2_The_Content_Conundrum.max-1000x1000.png"
        
          alt="2 The Content Conundrum"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p data-block-key="vjdj5"&gt;&lt;b&gt;Business Imperative: Paths to Profitability&lt;/b&gt;&lt;/p&gt;&lt;p data-block-key="fk4ch"&gt;When it comes to content discovery in streaming platforms, recommendations and search capabilities are central to user satisfaction. But now, generative AI is powering innovative solutions across the industry. From personalizing experiences and mitigating churn, to sustaining and growing subscription and advertising revenues, all the way to streamlining operational and content creation costs; gen AI represents a paradigm shift.&lt;/p&gt;&lt;p data-block-key="26ift"&gt;With Google Cloud’s gen AI solutions &lt;a href="https://cloud.google.com/vertex-ai?hl=en"&gt;including Vertex AI&lt;/a&gt;, we’re enhancing audience experiences by creating highly personalized and relevant content recommendations so viewers spend less time choosing and more time watching.&lt;/p&gt;&lt;p data-block-key="87pp2"&gt;With so many options, consumers often experience subscription fatigue making them unwilling to continue to subscribe and pay for more platforms. As a result, we’re seeing a rise in advertising video on demand (AVOD) and free, ad-supported television (FAST).These models offer consumers a cost effective way to view content, while allowing content providers to reach relevant audiences across the platforms that viewers value most.&lt;/p&gt;&lt;p data-block-key="dvf7c"&gt;As providers begin introducing new advertising tiers, 65% of viewers revealed that not only do they prefer personalized advertising, but they’ve come to expect it. Personalized recommendations can take into consideration specific elements like content titles and scenes, leading to a more contextual and personalized advertising experience for the individual consumer. Additionally, 64% of viewers are more likely to keep watching the service if ads feel more relevant to them. 
As the industry evolves and tries new business models, optimizing advertising will be more critical than ever for media companies.&lt;/p&gt;&lt;p data-block-key="vi58"&gt;&lt;b&gt;Redefining the Streaming Wars&lt;/b&gt;&lt;/p&gt;&lt;p data-block-key="f6rea"&gt;&lt;a href="https://www.globenewswire.com/en/news-release/2023/03/01/2617958/0/en/Video-Streaming-Market-Size-2022-2029-Worth-USD-1690-35-Billion-by-2029-Exhibiting-a-CAGR-of-19-9.html#:~:text=video%20streaming%20market%3F-,Video%20streaming%20market%20size%20was%20USD%20372.07%20billion%20in%202021,USD%201690.35%20billion%20by%202029." target="_blank"&gt;Fortune Business Insights&lt;/a&gt; estimates the video streaming market will grow to nearly $1.7 trillion USD by 2029. While the streaming wars are only just beginning, how they’re fought is changing. With so much on the line, creating hyper-personalized viewing and advertising experiences is essential to connecting with users and cementing a platform’s long-term success. Entertainment companies need not go it alone.&lt;/p&gt;&lt;p data-block-key="9uc38"&gt;Google Cloud is here to help media &amp;amp; entertainment companies, whether they’re video streaming platforms, audio streamers, broadcasters, or news publishers. We’re committed to leveraging AI to find practical solutions to their most pressing business challenges.
Since launching access to our large language models (LLMs) through Vertex AI earlier this year, companies have been streamlining many of the tools and processes they use today, and unlocking new understandings and insights across their teams.&lt;/p&gt;&lt;p data-block-key="3mi1g"&gt;To learn more about Google Cloud’s solutions for media &amp;amp; entertainment companies visit our website &lt;a href="https://cloud.google.com/solutions/media-entertainment"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;</description><pubDate>Wed, 15 Nov 2023 13:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/google-cloud-survey-finds-personalization-key-success-for-streaming-platforms/</guid><category>AI &amp; Machine Learning</category><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>The Content Conundrum: How poor personalization and search experiences impact streaming platforms and their audiences</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/google-cloud-survey-finds-personalization-key-success-for-streaming-platforms/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Anil Jain</name><title>Managing Director, Global Strategic Industries, Google Cloud</title><department></department><company></company></author></item><item><title>Providing scalable, reliable video distribution with Google Kubernetes Engine at AbemaTV</title><link>https://cloud.google.com/blog/products/application-modernization/abema-improves-the-viewing-experience-for-global-sporting-events/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;AbemaTV (ABEMA) is a free video streaming service with approximately 25 channels in a variety of genres broadcast 24 hours a day, 365 days a year. 
Since the start of our operations in 2015, ABEMA has been using Google Cloud as a platform for peripheral services such as video distribution APIs, data analysis, recommendation systems, and ad distribution. As ABEMA's services continued to grow, we needed a platform that could handle higher traffic. This is when we decided to adopt &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt;, a managed service that could sustain a high-speed development cycle even at large scale. &lt;/p&gt;&lt;p&gt;At the time, there weren’t many public use cases of container orchestration, but development using containers was much faster than normal VM-based development. And as our service continued to expand and network complexity grew, we found that choosing GKE was a good decision. Anthos Service Mesh also helped reduce that network complexity. &lt;/p&gt;&lt;p&gt;ABEMA was about to broadcast one of the largest global sporting events, and although the basic existing architecture was fine, we wanted to be prepared for a sudden surge in traffic. We expected to see an unprecedented amount of traffic during the event and needed to make sure we resolved any technical debt and implemented load countermeasures beforehand to make full use of the features of Google Cloud. &lt;/p&gt;&lt;p&gt;For example, with Bigtable, we had been using a single-zone configuration to focus on data consistency, but changed to a multi-zone configuration that maintains consistency using app profiles.&lt;/p&gt;&lt;p&gt;Originally, we had a self-hosted Redis Cluster, but we moved to Memorystore, reconfiguring and dividing it into instances per microservice as a countermeasure against latency under high load. MongoDB was similarly divided into instances per microservice. And until then, we had been operating in the Taiwan region, but we also took this opportunity to relocate to the Tokyo region in consideration of latency.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_y9w5e86.max-1000x1000.png"
        
          alt="1.png"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;h3&gt;Implementing thorough advance measures&lt;/h3&gt;&lt;p&gt;The traffic on the day of the global sporting event was the highest ever for ABEMA, but we were able to provide a stable broadcast of all matches without any major problems. In addition to the aforementioned system upgrades, Google Cloud's PSO team helped us take proactive measures that assured our success.&lt;/p&gt;&lt;p&gt;The first thing we did to manage the load for the day was to estimate the traffic and identify what we called the critical user journey — the 'no stoppages'. We analyzed the access logs from a previously broadcast large-scale martial arts event and used that as a reference to estimate the number of requests we would receive. This was only a rough estimate for reference when creating a test scenario. The full workflow included testing with a sufficiently large load based on the traffic obtained and taking measures to prevent the critical user journey from falling.&lt;/p&gt;&lt;p&gt;In creating the test scenario, we modeled the events that could occur during the game and the users’ behavior in response to them. This was to simulate what kind of phenomena would occur on the system side in response to accesses that fluctuated in real time, and to identify problems. With these test results, we built an infrastructure capacity that was 6 times the normal size.&lt;/p&gt;&lt;p&gt;In an actual broadcast, the traffic fluctuates from moment to moment as the match unfolds. For example, we created scenarios based on user behavior in various situations, such as immediately after the start of a match, at halftime, or when a goal is scored. Based on that, we thought about load measures for critical user journeys and measures against traffic spikes, etc. &lt;/p&gt;&lt;p&gt;The PSO team advised us on capacity planning, reviewed load tests, and proposed failure tests, greatly improving the reliability of our load measures. 
With only an in-house team, blind spots tend to occur, such as missed processes and incorrect prioritization, so it was very helpful to receive candid advice from the PSO team. We asked them to check from the perspective of experts who know the internals of the product — the internal components of Google Cloud — starting with whether we had made any mistakes in our approach to countermeasures. &lt;/p&gt;&lt;p&gt;For the load test, we used k6, a load-testing tool that runs on Kubernetes, and simulated the sequences of API calls leading to the critical user journey based on the test scenario. We repeated the cycle of gradually increasing the number of clients and implementing countermeasures when problems occurred, until we finally achieved the number of simultaneous viewers that was set as the business goal. Specifically, we took measures such as scaling out the database, adding circuit breakers, increasing the number of Kubernetes nodes, and adding the necessary resources. The simulation also used twice the number of weekly active users. &lt;/p&gt;&lt;p&gt;In the failure test, we confirmed the scope of impact and how the behavior of the entire system would change if a particularly high-risk component was stopped or if access to an external service was cut off. &lt;/p&gt;&lt;p&gt;In principle, it is impossible to intentionally break the cloud, so there was the problem of how to simulate a failure situation. The PSO team advised us on how to set up failure tests and how to intentionally create failure conditions, among other things. &lt;/p&gt;&lt;p&gt;One of the improvement items for load countermeasures they recommended was to introduce Cloud Trace, a distributed tracing system, which proved to be very useful for analyzing problems that occurred during load and failure tests.&lt;/p&gt;&lt;p&gt;With microservices, it can be difficult to identify where a problem is occurring.
Cloud Trace saved us a lot of time in identifying problems, as we were able to track detailed telemetry. Furthermore, some of the issues we uncovered during testing were of a nature that we wouldn’t have found without Cloud Trace. As failures tend to occur in a chain reaction, the distributed tracing mechanism was really useful.&lt;/p&gt;&lt;h3&gt;Improving the platform for load tests and failure tests&lt;/h3&gt;&lt;p&gt;As a result of these preliminary measures, we were able to deliver stable broadcasts from the tournament’s start to finish without any major problems. There were times when requests exceeded the expected maximum number of simultaneous connections. Despite this, thanks to proper planning and utilization of Google Cloud, we served full functionality for all matches.&lt;/p&gt;&lt;p&gt;For the number of concurrent connections, we initially set a goal to ensure full functionality for requests up to half of the target maximum connections, and beyond that, to deliver critical functionality up to the maximum number of target connections. As a result, the system was more stable than expected, and we were able to continue providing all functionality, even though at peak times the number of simultaneous connections was close to the maximum. We believe that this is the result of eliminating various bottlenecks in advance with the cooperation of the PSO team.&lt;/p&gt;&lt;p&gt;Based on this success, we would like to improve the platform so that we can perform load tests and failure tests continuously at the same level. Complex testing of loosely coupled microservices is extremely time-consuming and imposes a high cognitive load.
Building a platform where it could be done continuously would save a lot of time.&lt;/p&gt;&lt;/div&gt;</description><pubDate>Mon, 18 Sep 2023 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/application-modernization/abema-improves-the-viewing-experience-for-global-sporting-events/</guid><category>Media &amp; Entertainment</category><category>Application Modernization</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Providing scalable, reliable video distribution with Google Kubernetes Engine at AbemaTV</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/application-modernization/abema-improves-the-viewing-experience-for-global-sporting-events/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Mr. Junpei Tsuji</name><title>Engineer, Development Division, AbemaTV Inc</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Mr. Yoshikazu Umino</name><title>SRE, Development Division, AbemaTV Inc</title><department></department><company></company></author></item><item><title>Making social robot conversations more natural with Speech-to-Text</title><link>https://cloud.google.com/blog/products/application-modernization/driving-more-natural-conversations-with-speech-recognition/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;MIXI, Inc. (MIXI) is a social networking organization that provides a diverse range of services for friends and family to enjoy together, such as the social-media platform mixi, a mobile game called Monster Strike, and a family photo and video sharing service known as FamilyAlbum. 
One of our current projects is Romi, a social robot launched in April 2021 that uses &lt;a href="https://cloud.google.com/speech-to-text"&gt;Speech-to-Text&lt;/a&gt; by Google Cloud as its speech recognition engine.&lt;/p&gt;&lt;p&gt;Since the late 2010s, the social robot market has been booming, with some models becoming increasingly affordable for consumers, from robotic tutors that promote social and cognitive development for children, to companion robots for elderly care. But Romi’s quality of dialogue sets it apart from most social robots. &lt;/p&gt;&lt;p&gt;The biggest feature of Romi is that the AI developed internally by MIXI can generate a natural conversational exchange. About the size of a hand-held device, Romi can be placed anywhere in a room and has a screen that displays different facial expressions. It responds to conversation in context. Until now, AI has been used to interpret the intentions behind user speech, but Romi is an AI-powered robot that takes it a step further, generating spoken conversations. After all, Romi was created to offer heartwarming communication to those who are looking for it. This form of speech recognition did not exist before Romi was released. We hope users will enjoy conversing with it, including the occasional unexpected response.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1_MIXI.max-1000x1000.jpg"
        
          alt="1 MIXI.jpg"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p&gt;The speech recognition part was one of the most critical aspects of Romi. Most of the infrastructure that makes up Romi uses a main public cloud, which was used for other services then. As for speech recognition, we decided to try out the Speech-to-Text tool by Google Cloud, which was praised for its overwhelmingly high accuracy, and the prototype’s results were very positive. Even though we tried other companies' services before making the final decision, our conclusion about Speech-to-Text remains the same. &lt;/p&gt;&lt;p&gt;The accuracy and responsiveness of Speech-to-Text made the tool an effective one for a social robot like Romi. Google Cloud also provided a sense of security with its high reliability that has been demonstrated in enabling Romi’s workloads, and will be able to support continuous development of Romi’s services for the long run.&lt;/p&gt;&lt;p&gt;With the rapid development of speech recognition technology, MIXI decided to re-examine the speech recognition engine for Romi in June 2022, about a year after its release. We eventually decided to continue its use of Speech-to-Text. We reviewed about 10 companies' Japanese-compatible speech recognition engines, and found that Speech-to-Text offered the best results. In addition, Speech-to-Text has &lt;a href="https://cloud.google.com/speech-to-text/docs/transcription-model"&gt;several speech recognition transcription models&lt;/a&gt;, but we found that the latest short model, which specializes in short utterances, is more suitable for Romi than the default model.&lt;/p&gt;&lt;p&gt;The cost-savings that Speech-to-Text delivers is also impressive. The billing unit was changed from 15 seconds increments rounded up, to one second in November, and huge cost reductions could be expected with Romi. This is important to us because Romi does not have trigger phrases, such as “OK Google,” so as to achieve more natural conversations. 
As a result, it can recognize and process more speech than other social robots. While this results in a more user-friendly experience, it also requires greater workloads and can incur a higher cost compared to most speech recognition engines. But with the updated billing system that Speech-to-Text delivers, we are able to continue refining Romi’s speech recognition accuracy while keeping costs low. &lt;/p&gt;&lt;h3&gt;Improving data analysis with BigQuery&lt;/h3&gt;&lt;p&gt;Google Cloud was only used for speech recognition initially, but as Romi’s range of services expanded, more aspects of Romi were hosted on Google Cloud. Among these, the machine learning platform for AI was moved to Google Cloud at an early stage. Being able to use a cloud platform at an affordable cost makes Google Cloud very appealing, and Premium Support and technical account management helped us with our cost considerations.&lt;/p&gt;&lt;p&gt;Furthermore, MIXI started migrating the data analysis platform for Romi to BigQuery last year. &lt;a href="https://cloud.google.com/bigquery"&gt;BigQuery&lt;/a&gt; was chosen because it excels at bringing together and analyzing big data in various formats, and in-depth data analysis has become necessary to improve Romi’s services. What also made BigQuery an attractive choice was the ability to work in structured query language (SQL), a language the MIXI development team is familiar with. &lt;/p&gt;&lt;p&gt;In particular, we are grateful for software like &lt;a href="https://www.looker.com/google-cloud/" target="_blank"&gt;Looker&lt;/a&gt;. It takes a lot of work, even for engineers, to write complex queries, but with Looker, even non-engineers can intuitively perform fairly complex analysis.
About half a year ago, we held regular briefings mainly for employees interested in data analysis, and now they voluntarily run analyses, hold discussions based on the results, and create new projects and ideas. This has become a regular workflow for us.&lt;/p&gt;&lt;p&gt;Currently, the big development in AI-based communication is the emergence of large language models (LLMs) that learn from huge amounts of data and generate natural responses on a different level than before. &lt;/p&gt;&lt;p&gt;To improve the conversational experience with Romi, we have been looking into relevant LLM technologies for a while now. It is important to be able to use high-performance GPUs as inexpensively as possible in order to run PoCs at high speed. We will continue to focus on Google Cloud services, including Compute Engine and Vertex AI.&lt;/p&gt;&lt;/div&gt;</description><pubDate>Mon, 07 Aug 2023 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/application-modernization/driving-more-natural-conversations-with-speech-recognition/</guid><category>Media &amp; Entertainment</category><category>Application Modernization</category><media:content height="540" url="https://storage.googleapis.com/gweb-cloudblog-publish/images/MIXI.max-600x600.jpg" width="540"></media:content><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Making social robot conversations more natural with Speech-to-Text</title><description></description><image>https://storage.googleapis.com/gweb-cloudblog-publish/images/MIXI.max-600x600.jpg</image><site_name>Google</site_name><url>https://cloud.google.com/blog/products/application-modernization/driving-more-natural-conversations-with-speech-recognition/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Harumitsu Nobuta</name><title>Manager of Development Group, Romi Department, Vantage Studio, MIXI</title><department></department><company></company></author><author
xmlns:author="http://www.w3.org/2005/Atom"><name>Shinji Sakaguchi</name><title>SRE Group, CTO's office, Development Department, MIXI</title><department></department><company></company></author></item><item><title>How SEEN scaled output 89x and reduced GPU costs by 66% using Google Kubernetes Engine</title><link>https://cloud.google.com/blog/products/containers-kubernetes/startup-scales-personalized-video-output-with-gke/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;At &lt;a href="https://seen.io/" target="_blank"&gt;SEEN&lt;/a&gt;, we’re in the personalized video business.&lt;/p&gt;&lt;p&gt;We serve a wide range of clients who need to render and stream a high volume of unique, high definition videos that leverage data points — like name, gender, purchase history, etc. — to speak directly to their customers and constituents.&lt;/p&gt;&lt;p&gt;For some of these campaigns, we need to render and stream hundreds of thousands — or millions — of videos in just a few days, or even in just a few hours. 
To do so, we leverage an adaptable, scalable, and efficient cloud-based architecture built on Google Cloud and &lt;a href="http://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine&lt;/a&gt; (GKE).&lt;/p&gt;&lt;p&gt;In this blog, we’ll walk you through:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Why we needed to replace our legacy bare metal architecture with a new cloud-based architecture built on Google Cloud and GKE&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Why we chose Google Cloud as our cloud services partner&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;How Google Cloud helped us design and implement our new architecture, and what it looks like&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The quantitative and qualitative benefits we’ve experienced leveraging Google Cloud and GKE&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Let’s dig in.&lt;/p&gt;&lt;h3&gt;Our challenge: Rendering personalized videos at speed and scale&lt;/h3&gt;&lt;p&gt;Every startup needs an ambitious goal.&lt;/p&gt;&lt;p&gt;At SEEN, we want to render and stream millions of &lt;a href="https://seen.io/what-is-personalized-video" target="_blank"&gt;individual personalized videos&lt;/a&gt; to millions of individual people in just a few seconds.&lt;/p&gt;&lt;p&gt;This goal is ambitious, but it isn’t just another “moonshot” designed to sound impressive — achieving this goal is critical to our company’s growth and future.&lt;/p&gt;&lt;p&gt;Here’s why.&lt;/p&gt;&lt;p&gt;When we first launched SEEN, we attracted smaller clients who needed to render and stream a relatively low volume of personalized videos.&lt;/p&gt;&lt;p&gt;To serve these clients, we built a basic architecture based on colocated machines. 
Our system worked — it could render and stream thousands of cinema-quality personalized videos sent over the course of a few weeks — but it was highly manual and time-consuming to operate, and it slowed us down and didn’t scale very well.&lt;/p&gt;&lt;p&gt;This legacy architecture became a real problem as we grew and began to attract larger companies with larger projects. To serve these new clients, we needed to generate and stream many more personalized videos at a much faster rate. For example, one client needed 1.8 million videos in a few days, while another — a fitness app — needed 100 million videos in one day. &lt;/p&gt;&lt;p&gt;Projects at that speed and scale were impossible to consider with our legacy architecture, but to grow SEEN to the next level we needed to find a way to capture these enterprise clients and deliver on these types of projects. To do that, we needed to rebuild our old system from the ground up with a modern, efficient, and scalable cloud-based architecture.&lt;/p&gt;&lt;p&gt;Here's how we did it.&lt;/p&gt;&lt;h3&gt;Searching for a new partner: How (and why) we chose Google Cloud&lt;/h3&gt;&lt;p&gt;As soon as we realized we needed to rebuild our fundamental architecture, we began to search for a cloud services provider. We knew that finding the right partner — with the right products and support — could solve a lot of problems for us and accelerate the process of designing, building, and deploying our new video rendering system. &lt;/p&gt;&lt;p&gt;To start our search, we drew up a list of what we were looking for in an ideal cloud services partner. 
Our list included:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;The basics&lt;/b&gt;, like sustainability, security, and a local presence in Europe to help mitigate GDPR and other concerns (most of our clients were in Europe at the time).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Next-gen hardware&lt;/b&gt;, like NVIDIA GPUs, Kubernetes products, and emerging technical solutions such as GPU time-sharing, with more NVIDIA products on the roadmap. &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;White-glove support&lt;/b&gt;, with direct collaboration and guidance to help us solve both general and niche problems in our system.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This last point was key to deciding which service provider we would choose. At the time we had a small-but-mighty team of four developers. While they had significant expertise with the cloud and rendering engines, they just didn’t have the bandwidth to bring our vision to life. &lt;/p&gt;&lt;p&gt;With these requirements in hand, we reached out to cloud services providers, including Google Cloud.&lt;/p&gt;&lt;p&gt;We met with other providers, but Google Cloud felt different from the start. We met with them in person and held productive conversations about our existing legacy architecture. We presented our new architecture’s design, and they provided insights on how one of their solutions — Google Kubernetes Engine (GKE) — would fit well within it.&lt;/p&gt;&lt;p&gt;Looking back, the feedback and treatment we received from Google Cloud was miles ahead of other vendors. It felt like they really understood our use case, and really wanted to partner with us one-on-one to bring our vision to life. In addition, they had the right product suite for our needs and offered us early access to features that would fit our scalability needs, such as emerging GPUs from NVIDIA. &lt;/p&gt;&lt;p&gt;The choice was clear. 
We selected Google Cloud and got to work.&lt;/p&gt;&lt;h3&gt;Our new solution: How we render personalized videos today&lt;/h3&gt;&lt;p&gt; Google Cloud’s hands-on support and personalized attention didn’t stop after we signed our contract. Their teams have remained hands-on throughout the entire process of designing, deploying, and extending our new architecture.&lt;/p&gt;&lt;p&gt;As we built our new system, Google Cloud consultants:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Constantly reviewed and commented on our architecture designs and ongoing implementation process&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Pointed us to products that became key components of our new architecture, such as GPU timesharing&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Provided input as we solved small problems like configuring NVIDIA drivers to work with Kubernetes, and rewriting parts of our rendering engine to leverage Google Kubernetes Engine (GKE)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Continued to give us early previews of upcoming Kubernetes and GPU features, as well as their newest NVIDIA chips and machines &lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Our engineers worked hand-in-hand with Google Cloud's consultants to redesign our architecture from the ground up. Ultimately, we built on Google Cloud and created a cloud-based architecture capable of generating unique, personalized videos at a scale and speed we never achieved before. &lt;/p&gt;&lt;p&gt;While our new architecture is proprietary — and we need to keep it close-to-the-chest for competitive reasons — we can share a few of its key technical components. 
&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We created &lt;b&gt;worker pods&lt;/b&gt; that each perform a single rendering task at a time, and pull their jobs from Pub/Sub.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We use &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler"&gt;&lt;b&gt;GKE’s auto-scaling&lt;/b&gt;&lt;/a&gt; to rapidly scale from 1 to a virtually infinite number of nodes, and tailor their performance at the level of individual workloads. This gives us granular and responsive control over the compute power we deploy (and over our costs).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We use &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus"&gt;&lt;b&gt;GPU time-sharing&lt;/b&gt;&lt;/a&gt; between pods to leverage our compute resources as efficiently and effectively as possible — increasing our average GPU utilization by 1.6x and lowering our costs by 66%. &lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;By leveraging these services, our system now runs much faster, and is more efficient, scalable, reliable, and adaptable than before. With it we’ve transformed the products we offer, the scale and speed we can promise, and the clients we can serve — and it’s already delivered some significant technical and business outcomes. &lt;/p&gt;&lt;h3&gt;What we've achieved by partnering with Google Cloud&lt;/h3&gt;&lt;p&gt;By partnering with Google Cloud's consultants and rebuilding our architecture on Google Cloud and GKE, we’ve achieved outcomes that include:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Rendering 8,800% more videos per hour&lt;/b&gt;. With our old system, we could only render ~6,000 videos per hour. In a recent test of our new system, we effortlessly rendered 540,000 videos in one hour (and we know it's capable of rendering far, far more in that time frame).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Developing enterprise-class capacity&lt;/b&gt;. 
With our new system, we can now render and stream millions of personalized videos at speed and scale. With it we’ve been able to serve leading global companies like BMW, WWF, Red Bull, Red Cross, Coop, ICA, and Action Against Hunger — and take on bigger projects, like sending &lt;a href="https://seen.io/case/ica-personalized-loyalty-video" target="_blank"&gt;2+ million personalized videos&lt;/a&gt; for the largest food retailer in Sweden.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Clearing bottlenecks to our business&lt;/b&gt;. Before, we could only generate videos for one or two campaigns at a time. Now, we can comfortably handle as many campaigns as we’d like at one time — which means we no longer need to turn down projects and have increased our revenue.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Building and offering new products&lt;/b&gt;. We can expand our portfolio with powerful new products. For example, we can now build real-time personalized videos that stream seconds after the viewer inputs their data and presses “play,” letting us scale new, dynamic products.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Improving system visibility&lt;/b&gt;. We’ve increased our logs, alerts, and overall visibility into render status for our huge campaigns (something that was difficult with our legacy architecture).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;In sum&lt;/b&gt;: By partnering with Google Cloud, we have laid the groundwork to achieve our most ambitious technical and business goals — and we have already dramatically improved how we render and stream a huge volume of cinema-quality personalized videos, and how we serve our biggest and most demanding clients.&lt;/p&gt;&lt;h3&gt;Bring these results to your organization&lt;/h3&gt;&lt;p&gt;Today, we’re building more of our fundamental architecture on Google Cloud services, and improving its core performance so we can scale our personalized video production to insane levels. 
Our big, ambitious goal no longer seems out of reach. In fact, we now believe we will soon be able to render our cinema-quality personalized videos in 0.2 seconds or less. &lt;/p&gt;&lt;p&gt;To get there, we’re continuing our close partnership with the Google Cloud team. They are helping us develop new scaling strategies, giving us critical best practices to apply to our architecture, and guiding our development of new products like real-time rendering.&lt;/p&gt;&lt;p&gt;Together, we’ve transformed how our core products function — and you can do the same by partnering with Google Cloud’s teams, by building on Google Cloud and by using GKE. It’s worked for us — big time — and we know it can work for you too.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;&lt;i&gt;&lt;sup&gt;Special thank you to Alina Bylkova, Lekë Dobruna, Gabor Lossos, Vigan Sokoli, John Rowley, and Michael Ivanov.&lt;/sup&gt;&lt;/i&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Fri, 19 May 2023 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/containers-kubernetes/startup-scales-personalized-video-output-with-gke/</guid><category>Media &amp; Entertainment</category><category>Customers</category><category>Startups</category><category>Containers &amp; Kubernetes</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>How SEEN scaled output 89x and reduced GPU costs by 66% using Google Kubernetes Engine</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/containers-kubernetes/startup-scales-personalized-video-output-with-gke/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Ronald Griffin</name><title>CTO, SEEN</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Sahil Bajaj</name><title>Engineering Lead, SEEN</title><department></department><company></company></author></item><item><title>Picture this: How media companies can render faster — for less 
— with cloud-based NFS caching</title><link>https://cloud.google.com/blog/products/media-entertainment/gunpowder-uses-knfsd-caching-to-help-media-customers-use-cloud/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;Content creators are by nature unique. So it should come as no surprise that content creation companies also have unique needs — especially when it comes to building out the infrastructure they need to deliver their work on time, and on budget. &lt;/p&gt;&lt;p&gt;Gunpowder works with all manner of creators — advertising agencies, visual effects (VFX) studios, media companies, and even individual artists — to help them configure the compute, storage, and networking capacity that powers their ideas, both on-premises and in the cloud. As a Google Cloud partner, we recently started using an open-source NFS caching system called &lt;a href="https://github.com/GoogleCloudPlatform/knfsd-cache-utils" target="_blank"&gt;knfsd&lt;/a&gt; with several of our clients, and found that it can help solve two high-level challenges that creative companies face: &lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Harnessing excess capacity&lt;/b&gt;: Whether it’s as a long-term solution, as part of a migration, or scaling with cloud resources to accommodate a short-term deadline, creatives can augment on-prem compute and storage with a hybrid cloud configuration. &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Controlling costs&lt;/b&gt;: Media companies can save on both storage and compute costs by chasing lower-cost, ephemeral compute resources in the cloud, minimizing storage outlays, and keeping network charges low.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;To illustrate, read on for examples from some of the creative companies we work with, plus a behind-the-scenes look at how the virtual caching appliance works, and how to start using it as part of your own content creation workflow. 
&lt;/p&gt;&lt;h3&gt;Bursting to the cloud with Refik Anadol, media artist with a global conscience&lt;/h3&gt;&lt;p&gt;Earlier this year, renowned Turkish media artist Refik Anadol was invited to present his work at the World Economic Forum, a.k.a. Davos. Refik envisioned an AI-generated art installation that looked at the plight of coral reefs under climate change, utilizing approximately 100 million coral images as raw data. &lt;/p&gt;&lt;p&gt;Refik had just three weeks to produce the piece, but it would have taken the servers in his Los Angeles studio at least six weeks to complete the render. Refik turned to Gunpowder to help him meet his deadline. We identified low-cost &lt;a href="https://cloud.google.com/spot-vms"&gt;Spot VM&lt;/a&gt; capacity at Google Cloud’s Oregon data center (which has the added benefit of being powered by 100% carbon-free energy), and set up knfsd to cache the data there. Refik used up to 250 T4 GPUs to render the final piece, “Artificial Realities: Coral Installation,” delivering it in time for the event, all while using sustainably powered resources.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/1-Artificial-Realities---Coral---24.max-1000x1000.jpg"
        
          alt="1-Artificial-Realities---Coral---24.jpg"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;i&gt;Still from Artificial Realities, courtesy of Refik Anadol&lt;/i&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;h3&gt;Controlling costs at cloud-first VFX studio House of Parliament &lt;/h3&gt;&lt;p&gt;In the past year alone, House of Parliament has completed nine Super Bowl commercials, and generated all of the visual effects for the video for Taylor Swift’s song, ‘Lavender Haze,” among other projects.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/2-Paramount_Stallone_Face_SB.max-1000x1000.jpg"
        
          alt="2-Paramount_Stallone_Face_SB.jpg"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;i&gt;A still from a Paramount Plus Superbowl commercial, courtesy of House of Parliament&lt;/i&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p&gt;House of Parliament is 100% in Google Cloud, but which cloud region depends on the day. Demand for compute fluctuates depending on the project at hand, so Parliament is always chasing the region with available capacity at the lowest Spot price (Spot VMs are up to 91% cheaper than regular instances). Knfsd helps with that too. The caching appliance sits in front of its main file system in us-west2 (Los Angeles), to which the VM render nodes connect as they work on their daily jobs. This way, the render nodes can work off the House of Parliament’s full storage array without actually having to provision storage in the region or copy data there, which would have incurred additional costs and slowed down the workflow. &lt;/p&gt;&lt;h3&gt;Caching solution overview — the geeky stuff&lt;/h3&gt;&lt;p&gt;The cloud caching solution is relatively simple. Before, when customers wanted to use the cloud to scale existing compute, ‘render workers’ running in Google Cloud mounted and read files directly from on-prem NFS file servers. To minimize latency and optimize data transfer, customers provisioned a high-bandwidth pipe, using &lt;a href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview"&gt;Cloud VPN&lt;/a&gt;, or &lt;a href="https://cloud.google.com/network-connectivity/docs/interconnect/concepts/dedicated-overview"&gt;Dedicated Interconnect&lt;/a&gt;, between their on-prem data center and Google Cloud. While this solution certainly works, it can suffer from network latency, and the extra requests put a strain on customer’s on-prem NFS storage arrays.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/image5_F1GOxE6.max-1000x1000.png"
        
          alt="image5"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;i&gt;&lt;b&gt;On the left:&lt;/b&gt; Each VM mounts directly to on-premises file server. On the right: Each VM mounts the virtual caching appliance sitting in Google Cloud, in front of the on-premises file server.&lt;/i&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p&gt;Now, instead of the remote workers going back to the on-prem data center for their data, they first check in to see if the data they need is available from the cloud-based knfsd virtual caching appliance. This dramatically cuts down on the number of calls back to the on-prem file servers that the render workers need to make, enabling a reduction in the size of VPN or Dedicated Interconnects, improving throughput, and reducing I/O to and from on-prem file servers.  &lt;/p&gt;&lt;p&gt;Knfsd uses two existing Linux kernel modules: nfs-kernel-server (the standard Linux NFS Server), which supports NFS re-exporting; and cachefilesed (FS-Cache), which provides a persistent cache of network filesystems on disk. It works by mounting NFS exports from a source NFS filer (typically located on-prem) and re-exporting the mount points to downstream NFS clients (typically in Google Cloud). By re-exporting, the solution provides two layers of caching:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Level 1: The standard block cache of the operating system, residing in RAM.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Level 2: FS-Cache. A Linux kernel module which caches data from network filesystems locally on disk.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;When the volume of data exceeds available RAM (L1), FS-Cache simply caches the data on the disk, making it possible to cache terabytes of data. By leveraging &lt;a href="https://cloud.google.com/compute/docs/disks/local-ssd"&gt;Local SSDs&lt;/a&gt;, a single cache node can serve up to 9TB of data at blazingly fast speeds.&lt;/p&gt;&lt;h2&gt;Caching appliance FTW &lt;/h2&gt;&lt;p&gt;Implementing an NFS cache like knfsd brings on all sorts of benefits. Let’s take a look at six of them:&lt;/p&gt;&lt;h3&gt;1. 
Exponentially scale storage speeds&lt;/h3&gt;&lt;p&gt;In the customer stories above, we discussed how NFS caching was used to accelerate access to files.&lt;/p&gt;&lt;p&gt;For the on-prem filer to cloud-based cache pattern (e.g., Refik Anadol), we’ve seen as much as a 15X performance improvement from leveraging an NFS cache.&lt;/p&gt;&lt;/div&gt;
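The two cache levels described in the overview (a RAM block cache backed by FS-Cache on local SSD) behave like a small LRU tier in front of a large persistent tier. Below is a minimal Python model of that behavior; the capacities, names, and fetch function are assumptions for illustration, not knfsd code.

```python
from collections import OrderedDict

# Toy model of the two cache levels: a small RAM cache (L1) in front of a
# large local-SSD FS-Cache (L2), with the source filer as the origin.

class TwoLevelCache:
    def __init__(self, fetch, ram_capacity):
        self.fetch = fetch            # callable: read from the origin filer
        self.ram = OrderedDict()      # L1: block cache in RAM, bounded, LRU
        self.disk = {}                # L2: FS-Cache on local SSD, large
        self.ram_hits = self.disk_hits = self.origin_reads = 0
        self.ram_capacity = ram_capacity

    def get(self, path):
        if path in self.ram:                    # L1 hit
            self.ram.move_to_end(path)
            self.ram_hits += 1
            return self.ram[path]
        if path in self.disk:                   # L2 hit: promote to L1
            self.disk_hits += 1
            data = self.disk[path]
        else:                                   # miss: read from origin
            self.origin_reads += 1
            data = self.fetch(path)
            self.disk[path] = data              # persist in L2
        self.ram[path] = data
        if len(self.ram) > self.ram_capacity:   # evict LRU from L1 only;
            self.ram.popitem(last=False)        # the copy stays in L2
        return data

cache = TwoLevelCache(fetch=lambda p: f"frame:{p}", ram_capacity=2)
for path in ["a", "b", "c", "a", "b", "c"]:     # working set larger than RAM
    cache.get(path)
print(cache.origin_reads, cache.disk_hits)      # 3 origin reads, 3 disk hits
```

Once the working set has been pulled from the origin once, rereads are served from RAM or local disk even when the data exceeds RAM, which is how a single cache node can serve terabytes without repeated trips to the filer.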
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/4_15X.max-1000x1000.png"
        
          alt="4 15X.png"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;i&gt;In the image above, the blue line shows a combined throughput of 2.78GiB/s traversing our customer’s bonded dedicated interconnects from an on-premises filer to their knfsd cache nodes. The red line shows throughput from the caches to render hosts at a peak of over 42GiB/s, a more than 15X improvement.&lt;/i&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p&gt;For the cloud-based use case like House of Parliament, with a conservatively provisioned filer to cloud-based cache, we’ve seen as much as a 150X performance improvement by leveraging an NFS cache.&lt;br/&gt;&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/5_150X.max-1000x1000.png"
        
          alt="5 150X.png"&gt;
        
        &lt;/a&gt;
      
        &lt;figcaption class="article-image__caption "&gt;&lt;i&gt;The left image above shows a throughput of around 2GiB/s being served from a conservatively provisioned, cloud-based filer to the cache nodes. The right image shows throughput from the caches to render hosts at a peak of over 300GiB/s, an over 150X improvement.&lt;/i&gt;&lt;/figcaption&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;h3&gt;2. Stop over provisioning steady-state storage &lt;/h3&gt;&lt;p&gt;Traditionally, content creators have been forced to plan very carefully for how many resources (physical machines) they need to get a project done, and to match capital expenditures to their compute and storage needs. Even if you get this right at the start of a project, production needs seem to change constantly and playing this resource management game has been a challenging task — especially when you’re balancing multiple projects.&lt;/p&gt;&lt;p&gt;Cloud makes it much easier to scale compute to match these shifting requirements. And with NFS caching, you can easily scale storage size and performance to match as well, without having to provision more storage or perform large data migrations.&lt;/p&gt;&lt;/div&gt;
&lt;div class="block-image_full_width"&gt;






  
    &lt;div class="article-module h-c-page"&gt;
      &lt;div class="h-c-grid"&gt;
  

    &lt;figure class="article-image--large
      
      
        h-c-grid__col
        h-c-grid__col--6 h-c-grid__col--offset-3
        
        
      "
      &gt;

      
      
        
        &lt;img
            src="https://storage.googleapis.com/gweb-cloudblog-publish/images/6_spikyWorkloads.max-1000x1000.png"
        
          alt="6 spikyWorkloads.png"&gt;
        
        &lt;/a&gt;
      
    &lt;/figure&gt;

  
      &lt;/div&gt;
    &lt;/div&gt;
  




&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;h3&gt;3. Save money on compute&lt;/h3&gt;&lt;p&gt;NFS caching speeds up data throughput, thus decreasing render times and overall job costs. In addition, it can help manage latency, enabling jobs to move to remote data centers with more low-priced Spot VM capacity. This is possible because many of our customers integrate checkpointing for their render jobs, ensuring their workloads are fault tolerant, and thus suitable for Spot VMs  &lt;/p&gt;&lt;h3&gt;4. Save money on networking &lt;/h3&gt;&lt;p&gt;A lot of workloads reuse files many times over, for any number of processes. By moving the data once to the cloud, then accessing it multiple times from the NFS cache, we lower the amount of data which needs to traverse a VPN or dedicated interconnect. This allows us to rightsize the network and provision a smaller connection.&lt;/p&gt;&lt;h3&gt;5. Maintain artist productivity &lt;/h3&gt;&lt;p&gt;When a storage system is overloaded, artists can’t do their work. By offloading I/Ops and throughput to the caches, we keep these systems and their most important stakeholders, our customers’ employees, working as intended.&lt;/p&gt;&lt;h3&gt;6. Fit into existing pipelines&lt;/h3&gt;&lt;p&gt;By fully leveraging native NFS, knfsd seamlessly integrates with existing workflows. You don’t need to install any additional software tools on the render nodes’ VMs. For reads, you don’t need to modify paths or filenames when moving jobs between datacenters — everything is accessed in the same way. And when writing new files, the system writes all data directly back to the source-of-truth file.&lt;/p&gt;&lt;h2&gt;Looking ahead&lt;/h2&gt;&lt;p&gt;Today, a lot of creative companies still look longingly at the cloud as a way to complement their on-prem compute resources. 
But between steep learning curves, engineering resource constraints, and concerns about wide area network latency and bandwidth costs, using the cloud can seem out of reach.&lt;/p&gt;&lt;p&gt;At Gunpowder Technology, we focus on removing these technical and resource barriers for our customers, helping them get past the tech and back to delivering on their fullest creative potential. Knfsd is available as an &lt;a href="https://github.com/GoogleCloudPlatform/knfsd-cache-utils" target="_blank"&gt;open source repository&lt;/a&gt;, so if you have the engineering expertise and resources, you can integrate it yourself into a custom pipeline.&lt;/p&gt;&lt;p&gt;But if you’d rather lean on our years of experience and let us do the heavy lifting for you, reach out to &lt;a href="mailto:info@gunpowder.tech"&gt;info@gunpowder.tech&lt;/a&gt; to discuss how we can custom-tailor a solution for you.&lt;/p&gt;&lt;p&gt;&lt;i&gt;We’re excited to collaborate on these use cases with you. To learn more about how to deploy and manage the open source NFS caching system, see the &lt;a href="https://cloud.google.com/architecture/deploy-nfs-caching-proxy-compute-engine"&gt;single node tutorial&lt;/a&gt; or the Terraform-based deployment scripts hosted on &lt;a href="https://github.com/GoogleCloudPlatform/knfsd-cache-utils" target="_blank"&gt;GitHub&lt;/a&gt;. 
Or for an overview and demo of the solution, reach out to your sales team or Gunpowder.tech.&lt;/i&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Thu, 18 May 2023 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/gunpowder-uses-knfsd-caching-to-help-media-customers-use-cloud/</guid><category>Open Source</category><category>Storage &amp; Data Transfer</category><category>Networking</category><category>Media &amp; Entertainment</category><media:content height="540" url="https://storage.googleapis.com/gweb-cloudblog-publish/images/gunpowder.max-600x600.jpg" width="540"></media:content><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Picture this: How media companies can render faster — for less — with cloud-based NFS caching</title><description></description><image>https://storage.googleapis.com/gweb-cloudblog-publish/images/gunpowder.max-600x600.jpg</image><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/gunpowder-uses-knfsd-caching-to-help-media-customers-use-cloud/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Brennan Doyle</name><title>Solutions Architect</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Tom Taylor</name><title>Founder, Gunpowder Technologies</title><department></department><company></company></author></item><item><title>Game-changing IT security with Unity, Orca Security, and Google Cloud</title><link>https://cloud.google.com/blog/topics/partners/game-changing-it-security-with-unity-orca-security-and-google-cloud/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;Every month, more than 1 million creators worldwide use &lt;a href="https://unity.com/" 
target="_blank"&gt;Unity’s expansive platform&lt;/a&gt; to develop games, create beautiful visual effects, and design everything from electric cars to skyscrapers. The company’s comprehensive suite of solutions makes it easier to create, run, and monetize 2D and 3D content.&lt;/p&gt;&lt;p&gt;Unity has long recognized the importance of security not just to protect critical networks and sensitive information, but also to maintain an always-on, undistracted experience for its users. Every millisecond counts for the platform that is used to make &lt;a href="https://seekingalpha.com/article/4502532-unity-software-much-more-than-a-games-company" target="_blank"&gt;70% of the top 1,000 mobile games&lt;/a&gt; globally, and its infrastructure needs to be optimally resilient against downtime and security threats.&lt;/p&gt;&lt;p&gt;That’s why Unity selected &lt;a href="https://cloud.google.com/"&gt;Google Cloud&lt;/a&gt; and partner&lt;a href="https://orca.security/" target="_blank"&gt; Orca Security&lt;/a&gt; to safeguard cloud workloads, data, and users across multi-cloud development and runtime environments. Let’s take a look at how the partnership between Orca and Google Cloud helps Unity maintain optimal visibility across their IT landscape for more reliable, secure, and dynamic performance.&lt;/p&gt;&lt;h3&gt; A ‘better together’ approach to security&lt;/h3&gt;&lt;p&gt;We know that security is a team effort, and we work with our robust ecosystem of Google Cloud partners to provide customers with the best possible solutions to each of their unique needs. 
Orca is a powerful part of that ecosystem, built on Google Cloud and leveraging integrations with our native security capabilities.&lt;/p&gt;&lt;p&gt;The Google Cloud partnership with Orca benefits customers like Unity because Orca integrates with Google Cloud security solutions like &lt;a href="https://cloud.google.com/solutions/security-information-event-management"&gt;Chronicle SIEM&lt;/a&gt; and &lt;a href="https://cloud.google.com/security-command-center"&gt;Security Command Center&lt;/a&gt;, bringing that data into Orca’s Unified Data Model and its &lt;a href="https://orca.security/platform/agentless-sidescanning/" target="_blank"&gt;SideScanning&lt;/a&gt; technology, which supports Google Cloud workloads.&lt;/p&gt;&lt;p&gt;“We want to adopt capabilities that deliver security across Unity as fast as possible, and without the partnership between Orca and Google Cloud, we would not be able to do so quickly, proactively, and efficiently,” said Justin Somaini, chief security officer at Unity. “We recognize the valuable capabilities coming from Orca that are born out of the relationship between Orca and Google Cloud. We benefit from their partnership.”&lt;/p&gt;&lt;p&gt;Orca combines with Google Cloud’s &lt;a href="https://cloud.google.com/architecture/framework/security"&gt;secure-by-design infrastructure&lt;/a&gt; to help customers like Unity keep a close eye on all assets across multi-cloud environments, safeguarding data, networks, and end users from threats as they arise.&lt;/p&gt;&lt;h3&gt;Advanced use of APIs for security and performance monitoring&lt;/h3&gt;&lt;p&gt;Unity first chose to work with Orca because it needed an automated, out-of-the-box security solution that seamlessly integrates with &lt;a href="https://cloud.google.com/apis"&gt;Google Cloud APIs&lt;/a&gt; and provides full asset visibility. 
Orca also proved to be the right solution thanks to its efficient scaling across Unity’s entire digital and physical infrastructure, including multi-cloud environments, applications, and endpoints. &lt;/p&gt;&lt;p&gt;Orca leverages Google Cloud API updates to introduce new features and capabilities that go far beyond identifying security risks and preventing attacks such as denial-of-service and ransomware. For example, Orca reveals idle, paused, and stopped workloads, as well as orphaned applications and endpoints that require consolidation or decommissioning.&lt;/p&gt;&lt;p&gt;Unity also looked for an alternative to resource-intensive virtual security agents, as they tended to be challenging to deploy and could negatively impact performance.&lt;/p&gt;&lt;p&gt;“Orca’s Unified Data Model and SideScanning technology provides full security visibility — and coverage — across clouds and endpoints while eliminating the need for resource-intensive agents,” said Vijay Sharma, security leader at Unity. “This in turn helps us maintain peak system performance while continuously scanning for vulnerabilities, malware, misconfigurations, and weak or leaked passwords.”&lt;/p&gt;&lt;h3&gt;Improving performance and collaboration across teams&lt;/h3&gt;&lt;p&gt; Security and DevOps teams have become more unified and performant since Unity began using Orca.&lt;/p&gt;&lt;p&gt;Unity leverages Orca to build secure solutions that mitigate new threats and fully comply with strict international data protection standards such as PCI-DSS, SOC 2, and NIST.  Orca’s cross-departmental capabilities empower the security team to closely collaborate with developers on strategic, high-level projects.&lt;/p&gt;&lt;p&gt;Unity has seen an improvement in its IT security and DevOps working relationship, as the two teams access a single, centralized source of truth presented in a common language. 
Orca’s real-time data delivery integrates smoothly with Unity’s security workflows alongside DevOps compilers and ticketing systems like Jira.&lt;/p&gt;&lt;p&gt;“Instead of boiling the ocean, we jointly analyze the threat landscape in real time and make informed decisions about how to best secure our products with DevOps,” said Sharma.&lt;/p&gt;&lt;p&gt;The developers Unity serves have also enjoyed the benefits of this more dynamic and modernized approach to security automation. With automated cloud security now embedded into the continuous integration and continuous delivery (CI/CD) process with Orca, developers can scan Infrastructure as Code templates and container images in minutes rather than hours.&lt;/p&gt;&lt;p&gt;Orca’s constant monitoring of cloud provider logs and threat intelligence feeds helps Unity proactively identify anomalous events, expediting key decision-making processes.&lt;/p&gt;&lt;p&gt;“Orca’s single pane-of-glass dashboards display actionable data and alerts that further increase operational efficiency while reducing mean-time-to-resolution,” said Somaini. 
&lt;/p&gt;&lt;p&gt;With more than 70% of the top 1,000 mobile games globally being created with Unity, countless developers and end users will feel the positive impacts of Unity’s work with Orca and Google Cloud in the years ahead.&lt;/p&gt;&lt;p&gt;&lt;i&gt;To learn more about how partners help organizations make the most of Google Cloud, you can&lt;a href="https://cloud.google.com/partners"&gt; visit our partner page&lt;/a&gt;.&lt;/i&gt;&lt;/p&gt;&lt;/div&gt;</description><pubDate>Mon, 15 May 2023 16:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/topics/partners/game-changing-it-security-with-unity-orca-security-and-google-cloud/</guid><category>Security &amp; Identity</category><category>Media &amp; Entertainment</category><category>Partners</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Game-changing IT security with Unity, Orca Security, and Google Cloud</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/topics/partners/game-changing-it-security-with-unity-orca-security-and-google-cloud/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Apurva Dave</name><title>Director, Security Product Marketing, Google Cloud</title><department></department><company></company></author></item><item><title>Three ways media leaders can leverage generative AI</title><link>https://cloud.google.com/blog/products/media-entertainment/what-generative-ai-means-for-the-media-and-entertainment-industry/</link><description>&lt;div class="block-paragraph"&gt;&lt;p&gt;The digital era turned the traditional formula for media and entertainment success on its head, ushering in new technologies that have changed how content is produced, distributed, experienced, and monetized. Audiences have more choice, flexibility, and power over what they consume, and today’s media companies have to embrace ongoing transformation or risk falling behind – or becoming irrelevant. 
&lt;/p&gt;&lt;p&gt;A new wave of transformation is arriving with &lt;a href="https://cloud.google.com/ai/generative-ai"&gt;generative AI&lt;/a&gt;, a type of artificial intelligence that can interact with users in natural language and create novel data, ranging from story outlines, reports, and other text outputs to multimodal content like images, videos, and audio. Media and entertainment are inherently about content creation and creativity—so what does this new technology mean for the industry? &lt;/p&gt;&lt;p&gt;At Google Cloud, we see tremendous opportunity for creative industries, from more efficient creation methods to improved user experiences. Let’s explore.&lt;/p&gt;&lt;h3&gt;AI for media with Google Cloud&lt;/h3&gt;&lt;p&gt;Google Cloud has a long history with large language models (LLMs) and other generative AI technologies—from their influence over the years on products like &lt;a href="https://cloud.google.com/document-ai"&gt;Document AI&lt;/a&gt;, to recent announcements like &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-model-garden-and-generative-ai-studio"&gt;Generative AI support in Vertex AI&lt;/a&gt;, which lets businesses access and tune generative AI foundation models, and &lt;a href="https://www.youtube.com/watch?v=kOmG83wGfTs" target="_blank"&gt;Generative AI App Builder&lt;/a&gt;, which lets developers &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/create-generative-apps-in-minutes-with-gen-app-builder"&gt;build chatbots and other generative apps in minutes&lt;/a&gt;.  &lt;/p&gt;&lt;/div&gt;
&lt;div class="block-video"&gt;



&lt;div class="article-module article-video "&gt;
  &lt;figure&gt;
    &lt;a class="h-c-video h-c-video--marquee"
      href="https://youtube.com/watch?v=yg2yHIKQ7oM"
      data-glue-modal-trigger="uni-modal-yg2yHIKQ7oM-"
      data-glue-modal-disabled-on-mobile="true"&gt;

      
        &lt;img src="//img.youtube.com/vi/yg2yHIKQ7oM/maxresdefault.jpg"
             alt="In this video, learn how Google Cloud is making it easy to access, customize, and deploy large models."/&gt;
      
      &lt;svg role="img" class="h-c-video__play h-c-icon h-c-icon--color-white"&gt;
        &lt;use xlink:href="#mi-youtube-icon"&gt;&lt;/use&gt;
      &lt;/svg&gt;
    &lt;/a&gt;

    
      &lt;figcaption class="article-video__caption h-c-page"&gt;
        
          &lt;h4 class="h-c-headline h-c-headline--four h-u-font-weight-medium h-u-mt-std"&gt;Build, tune and deploy foundation models with Vertex AI&lt;/h4&gt;
        
        
      &lt;/figcaption&gt;
    
  &lt;/figure&gt;
&lt;/div&gt;

&lt;div class="h-c-modal--video"
     data-glue-modal="uni-modal-yg2yHIKQ7oM-"
     data-glue-modal-close-label="Close Dialog"&gt;
   &lt;a class="glue-yt-video"
      data-glue-yt-video-autoplay="true"
      data-glue-yt-video-height="99%"
      data-glue-yt-video-vid="yg2yHIKQ7oM"
      data-glue-yt-video-width="100%"
      href="https://youtube.com/watch?v=yg2yHIKQ7oM"
      ng-cloak&gt;
   &lt;/a&gt;
&lt;/div&gt;

&lt;/div&gt;
&lt;div class="block-paragraph"&gt;&lt;p&gt;We've helped our global media and entertainment customers with AI for &lt;a href="https://cloud.google.com/resources/personalizing-media-for-digital-audiences-ebook"&gt;personalization&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=YyP61ECLFOc" target="_blank"&gt;search&lt;/a&gt; and &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/how-newsweek-increased-total-revenue-with-recommendations-ai"&gt;recommendations&lt;/a&gt;, predictive &lt;a href="https://cloud.google.com/bigquery"&gt;analytics&lt;/a&gt;, and much more — and with generative AI now on the rise, we have some ideas to help media leaders, technologists, and creators think about and prepare to utilize powerful AI in their work. &lt;/p&gt;&lt;h3&gt;Three lenses on innovation in media&lt;/h3&gt;&lt;p&gt;The media and entertainment industry is increasingly diverse and complex, with companies spanning over-the-top (OTT) subscription streaming services, 24-hour linear channels, live broadcasts of sporting events, digital journalism, traditional publishing, short-form user-generated social video, and more. 
More and more, the boundaries between these segments of the media industry are blurring — but common to them all is the focus on providing compelling content in an engaging audience experience that can be directly or indirectly monetized.&lt;/p&gt;&lt;p&gt;With this in mind, we suggest media and entertainment companies look at the application of innovative technologies like generative AI through the following three lenses:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Improving content creation, production, and management&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Enhancing and personalizing audience experiences&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Improving monetization&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;Improving content creation, production, and management&lt;/h3&gt;&lt;p&gt;Generative AI democratizes many aspects of content creation, opening new ways to create written material, illustrations, sound effects, special effects, and more. Its recent maturation has been so rapid that some in the media industry have expressed concern that generative AI implies the end of creative professions. We think the opposite is more likely: just as photography, audio recordings, and computer-generated images have enabled new modes of creativity, rather than making old ones obsolete, generative AI has the potential to both enable new forms of expression and enhance familiar ones. &lt;/p&gt;&lt;p&gt;For example, journalists could use generative AI to speed up research by helping them synthesize and analyze large volumes of information, or to help them create initial drafts or summaries of editorial content. Film and television producers could leverage the technology to accelerate the post-production editing process, with new AI-enabled interfaces for rapidly adjusting or enhancing scene details such as lighting and color. Broadcasters could use generative AI to make vast libraries of video footage searchable and accessible for use in telling more compelling stories. 
The potential use cases go on and on.&lt;/p&gt;&lt;p&gt;Far from undermining creative professions, generative AI is poised to free writers, artists, editors, and many others from the tedious and mundane aspects of their work, empowering them to focus more of their time on creativity.&lt;/p&gt;&lt;h3&gt;Enhancing and personalizing audience experiences&lt;/h3&gt;&lt;p&gt;Every media organization in the world today faces the reality that for most consumers, switching costs are extremely low. This puts incredible pressure on these companies to invest in delivering low-friction and compelling audience experiences that help keep subscribers from churning and viewers from abandoning content experiences for competing platforms. &lt;/p&gt;&lt;p&gt;Generative AI can help media companies engage and retain viewers, such as by enabling more powerful search and recommendations on their digital content platforms. With its increasingly multimodal capabilities extending from natural language to both audio and video content, generative AI is well-positioned to power more personalized audience experiences. &lt;/p&gt;&lt;p&gt;Consumers often complain about “the paradox of choice” or their inability to find something interesting to watch on streaming platforms that have incredibly vast libraries of content available on demand. Imagine a not-too-distant future wherein a consumer can simply ask the content platform they’re using to help them find a specific show to watch based on mood, specific types of scenes, combinations of actors, award nominations, or practically anything they can think to ask. 
And that’s just the tip of the iceberg — imagine generative AI’s potential to curate, assemble, and even create personalized content for a viewer to consume!&lt;/p&gt;&lt;h3&gt;Improving monetization&lt;/h3&gt;&lt;p&gt;As consumers’ content consumption further expands from traditional theatrical and linear television programming to include digital offerings across an array of platforms, devices, and content types, media companies face the challenge of maintaining and improving monetization. The conventional economics and approaches to advertising and subscription models are proving, in many cases, not to deliver sufficient ROI. &lt;/p&gt;&lt;p&gt;Generative AI has the potential to help media companies improve their monetization of audience experiences. As mentioned previously, enhanced personalization can play a role in mitigating churn, which in turn can help sustain and grow subscription and advertising revenues. Going beyond this, generative AI can be leveraged to drive even greater advertising revenues via more targeted, contextual, and personalized advertisements. Imagine both display and video advertisements that are generated on the fly to personalize product specifics, messaging, style, colors, and innumerable other characteristics to drive greater engagement and higher click-through rates (CTR), and thus higher advertising CPMs (cost per thousand impressions).&lt;/p&gt;&lt;h3&gt;Coming up next&lt;/h3&gt;&lt;p&gt;Generative AI presents a significant opportunity for media companies to fundamentally transform content creation, engagement, and monetization. Compelling services are already on the market — but there is far more to come. &lt;/p&gt;&lt;p&gt;Google Cloud continues to build on its deep experience and expertise with AI, and we are committed to working with the industry to develop compelling, accessible, trusted, and responsible AI solutions that will drive meaningful business outcomes. 
We are excited to create the future together with our global media customers and partners across the ecosystem. To learn more about this disruptive topic, read “&lt;a href="https://cloud.google.com/blog/transform/prompt-debunking-five-generative-ai-misconceptions"&gt;Debunking five generative AI misconceptions&lt;/a&gt;” from Google Cloud vice president of AI &amp;amp; Business Solutions Phil Moyer, or explore our &lt;a href="https://cloud.google.com/ai?hl=en"&gt;Trusted Tester Program for generative AI&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;</description><pubDate>Tue, 18 Apr 2023 18:00:00 +0000</pubDate><guid>https://cloud.google.com/blog/products/media-entertainment/what-generative-ai-means-for-the-media-and-entertainment-industry/</guid><category>AI &amp; Machine Learning</category><category>Media &amp; Entertainment</category><og xmlns:og="http://ogp.me/ns#"><type>article</type><title>Three ways media leaders can leverage generative AI</title><description></description><site_name>Google</site_name><url>https://cloud.google.com/blog/products/media-entertainment/what-generative-ai-means-for-the-media-and-entertainment-industry/</url></og><author xmlns:author="http://www.w3.org/2005/Atom"><name>Anil Jain</name><title>Managing Director, Global Strategic Industries, Google Cloud</title><department></department><company></company></author><author xmlns:author="http://www.w3.org/2005/Atom"><name>Lluis Canet</name><title>Solutions Lead, Media Analytics and AI, Google Cloud</title><department></department><company></company></author></item></channel></rss>