Commonwealth Games 2022, Day 11: India Schedule, Events List On August 8, Live Streaming, IST Time, TV Channel

CWG 2022, Day 11: India Schedule

India will be chasing five gold medals on Day 11 of CWG 2022, three of them in badminton: women’s singles, men’s singles and men’s doubles.

India are in with a good chance of winning all three of those gold medals, with PV Sindhu and Lakshya Sen contesting the singles finals.

In the doubles, India have in their ranks Chirag Shetty and Satwiksairaj Rankireddy, and they too are well placed to win the yellow metal.

In table tennis too, India will pursue gold in the men’s singles. In hockey, India’s men will face their Australian counterparts in the gold medal match, which will also be India’s final competitive event of CWG 2022.

India will certainly want gold in the hockey, as the women’s team ended up with bronze after a controversial penalty shootout against Australia in their semifinal.

However, the task of winning the gold and extracting some vengeance on Australia will not be easy, as the Aussies are a formidable side.

The closing ceremony is scheduled for midnight, bringing the curtain down on a sporting event that delivered its share of thrills and disappointments over the last fortnight.

Here is India’s Day 11 schedule at the Commonwealth Games. The matches will be telecast on the Sony Sports Network, with live streaming on Sony LIV.

Badminton

Women’s Singles Final: PV Sindhu vs Michelle Li – 1:20 pm

Men’s Singles Final: Lakshya Sen vs Ng Tze Yong – 2:10 pm

Men’s Doubles Final: Chirag Shetty/Satwiksairaj Rankireddy vs Ben Lane/Sean Vendy – 3:00 pm


Table Tennis

Men’s Bronze Medal Match: G Sathiyan vs Paul Drinkhall – 3:35 pm

Men’s Gold Medal Match: Achanta Sharath Kamal vs Liam Pitchford – 4:25 pm


Hockey

Men’s Gold Medal Match: India vs Australia – 5 pm


Closing Ceremony: 12 am

FIDE Chess Olympiad 2022 Live Streaming: When and where to watch Chess events live online

FIDE Chess Olympiad 2022 Live Streaming

FIDE Chess Olympiad 2022 Live Streaming Updates: Chess Olympiad fever is peaking in Chennai, with the Indian teams appearing primed for glory in the 44th edition of the prestigious event, which starts on Thursday. With powerhouses Russia and China missing, India will field three teams each in the Open and women’s sections. Though five-time world champion Viswanathan Anand has chosen not to play, donning the mentor’s hat this time, the Indian teams nonetheless look formidable.

Chess Olympiad 2022 live streaming details:

Where will the 44th Chess Olympiad be played?

The 44th Chess Olympiad will be played at the Four Points by Sheraton Mahabalipuram Resort and Convention Centre, located on the East Coast Road in Chennai, Tamil Nadu.

How do I watch the live telecast of the Chess Olympiad 2022 opening ceremony?

The live telecast of the Chess Olympiad 2022 opening ceremony will be carried by Doordarshan.

How do I watch the live streaming of the 44th Chess Olympiad opening ceremony?

You can watch the live streaming of the 44th Chess Olympiad, including the opening ceremony, on the YouTube channels of ChessBase India and FIDE.

When will the opening ceremony for the 44th Chess Olympiad 2022 begin?

The matches of the Chess Olympiad 2022 start at 3 PM IST on Friday and the opening ceremony for the tournament is scheduled to begin at 6 PM IST on Thursday.

What is the schedule for the Chess Olympiad 2022?

July 29: Round 1 at 3 pm (IST)

July 30: Round 2 at 3 pm (IST)

July 31: Round 3 at 3 pm (IST)

August 1: Round 4 at 3 pm (IST)

August 2: Round 5 at 3 pm (IST)

August 3: Round 6 at 3 pm (IST)

August 4: Rest Day

August 5: Round 7 at 3 pm (IST)

August 6: Round 8 at 3 pm (IST)

August 7: Round 9 at 3 pm (IST)

August 8: Round 10 at 3 pm (IST)

August 9: Round 11 at 3 pm (IST)

Amazon Rekognition Introduces Streaming Video Events

AWS recently announced the general availability of Streaming Video Events, a new feature of Amazon Rekognition to provide real-time alerts on live video streams.

The managed service for image and video analysis can help camera manufacturers and service providers detect objects such as people, animals, and packages in live video streams from connected cameras. Streaming Video Events triggers a notification to the device as soon as the expected object is detected. Prathyusha Cheruku, principal product manager at AWS, explains how it works:

The service starts analyzing the video clip only when a motion event is triggered by the camera. When the desired object is detected, it sends a notification that includes the objects detected, bounding box coordinates, zoomed-in image of the objects detected, and the timestamp. The Amazon Rekognition pre-trained APIs provide high accuracy even in varying lighting conditions, camera angles, and resolutions.

Source: https://aws.amazon.com/rekognition/connected-home

Amazon Rekognition Video relies on Kinesis Video Streams to receive and process the video stream: the AWS::Rekognition::StreamProcessor CloudFormation type creates a stream processor used to detect and recognize faces or to find connected-home labels.
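
For orientation, a stream processor for connected-home labels can also be created and started with the AWS SDK. The following boto3 sketch is illustrative only; all ARNs, names and thresholds are placeholders, not values from the announcement:

    import boto3

    rekognition = boto3.client("rekognition")

    # All ARNs and names below are placeholders.
    rekognition.create_stream_processor(
        Name="front-door-processor",
        Input={"KinesisVideoStream": {
            "Arn": "arn:aws:kinesisvideo:us-east-1:111122223333:stream/front-door/1"}},
        Output={"S3Destination": {"Bucket": "camera-snapshots-bucket"}},
        RoleArn="arn:aws:iam::111122223333:role/RekognitionStreamRole",
        Settings={"ConnectedHome": {
            "Labels": ["PERSON", "PET", "PACKAGE"],  # objects to alert on
            "MinConfidence": 80,
        }},
        NotificationChannel={
            "SNSTopicArn": "arn:aws:sns:us-east-1:111122223333:camera-alerts"},
    )

    # Start analysis when the camera reports motion; the stop selector bounds
    # the processed clip length (10 to 120 seconds, per the article).
    rekognition.start_stream_processor(
        Name="front-door-processor",
        StartSelector={"KVSStreamStartSelector": {"ProducerTimestamp": 1659000000000}},
        StopSelector={"MaxDurationInSeconds": 60},
    )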

To better manage the machine learning inferencing costs, customers can specify the length of the video clips to be processed (between 10 and 120 seconds) and can choose one or more objects such as people, pets, and packages, minimizing false alerts from camera motion events. Cheruku clarifies the benefit of Streaming Video Events over traditional motion detectors:

Many camera manufacturers and security service providers offer home security solutions that include camera doorbells, indoor cameras, outdoor cameras, and value-added notification services to help their users understand what is happening on their property. Cameras with built-in motion detectors are placed at entry or exit points of the home to notify users of any activity in real time, such as “Motion detected in the backyard”. However, motion detectors are noisy, can be set off by innocuous events like wind and rain, creating notification fatigue, and resulting in clunky home automation setup.

According to AWS, service providers can use the feature to create better in-app experiences, for example Alexa announcements such as “a package was detected at the front door”. In a separate article, Mike Ames, Prathyusha Cheruku, and David Robo explain how 3xLOGIC uses the new feature to provide intelligent video analytics on live video streams to monitoring agents.

Streaming Video Events is not the only new feature of Amazon Rekognition. Among the 2022 announcements, Rekognition Video added new languages for text detection, introduced new Face APIs for improved accuracy, and improved content moderation.

Streaming Video Events is available in a subset of AWS regions, including Northern Virginia, Ohio, Ireland and Mumbai. Label detection is charged at $0.00817 per minute, billed in one-minute increments. The processing of Kinesis Video Streams is charged separately.

NoSQL, NoMQ: Palo Alto Networks’ New Event Streaming Paradigm

Global cybersecurity leader Palo Alto Networks processes terabytes of network security events each day. It analyzes, correlates and responds to millions of events per second — many different types of events, using many different schemas, reported by many different sensors and data sources. One of its many challenges is understanding which of those events actually describe the same network “story” from different viewpoints.

Cynthia Dunlop

Cynthia has been writing about software development and testing for much longer than she cares to admit. She’s currently senior director of content strategy at ScyllaDB.

Accomplishing this would traditionally require both a database to store the events and a message queue to notify consumers about new events arriving in the system. But to mitigate the cost and operational overhead of deploying yet another stateful component to its system, Palo Alto Networks’ engineering team decided to take a different approach.

This article explains why and how Palo Alto Networks completely eliminated the MQ layer for a project that correlates events in near real time. Instead of using Kafka, it decided to use an existing low-latency distributed database as an event data store and as a message queue. It’s based on the information that Daniel Belenky, principal software engineer at Palo Alto Networks, recently shared at ScyllaDB Summit.

Background: Events, Events Everywhere

Belenky’s team develops the initial data pipelines that receive the data from endpoints, clean the data, process it and prepare it for further analysis in other parts of the system. One of their top priorities is building accurate stories.

As Belenky explained, “We receive multiple event types from multiple different data sources. Each of these data sources might be describing the same network session but from different points on the network. We need to know if multiple events — say, one event from the firewall, one event from the endpoint and one event from the cloud provider — are all telling the same story from different perspectives.” Their ultimate goal is to produce one core enriched event that comprises all the related events and their critical details.

For example, assume a router’s sensor generates a message (here, two DNS queries). One second later, a custom system sends a message indicating that someone performed a login and someone else performed a sign-up. Eight minutes after that, a third sensor sends another event: some HTTP logs. All these events, which arrived at different times, might actually describe the same session and the same network activity.

Different events might describe the same network activity in different ways.

The system ingests the data reported by the different devices at different times and normalizes it to a canonical form that the rest of the system can process. But there’s a problem: This results in millions of normalized but unassociated entries. There’s a ton of data across the discrete events, but not (yet) any clear insight into what’s really happening on the network and which of those events are cause for concern.

Palo Alto Networks needed a way to group unassociated events into meaningful stories about network activity.

Evolving from Events to Stories

Why is it so hard to associate discrete entries that describe the same network session?

  • Clock skew across different sensors: Sensors might be located across different data centers, computers and networks, so their clocks might not be synchronized to the millisecond.
  • Thousands of deployments to manage: Given the nature of its business, Palo Alto Networks provides each customer a unique deployment. This means the solution must be optimized for everything from small deployments that process bytes per second to larger ones that process gigabytes per second.
  • Sensors’ viewpoints on the session: Different sensors have different perspectives on the same session. One sensor’s message might report the transaction from point A to point B, and another might report the same transaction in the reverse direction.
  • Zero tolerance for data loss: For a cybersecurity solution, data loss could mean undetected threats. That’s simply not an option for Palo Alto Networks.
  • Continuous out-of-order stream: Sensors send data at different times, and the event time (when the event occurred) is not necessarily the same as the ingestion time (when the event was sent to the system) or the processing time (when the system was able to start working on the event).

The gray events are related to one story, and the blue events are related to another story. Note that while the gray ones are received in order, the blue ones are not.

What’s required to convert the millions of discrete events into clear stories that help Palo Alto Networks protect its clients? From a technical perspective, the system needs to:

  1. Receive a stream of events.
  2. Wait some amount of time to allow related events to arrive.
  3. Decide which events are related to each other.
  4. Publish the results.

Additionally, there are two key business requirements to address. Belenky explained, “We need to provide each client a single-tenant deployment to provide complete isolation. And we need to support deployments with everything from several KB per hour up to several GBs per second at a reasonable cost.”

Belenky and team implemented and evaluated four different architectural approaches for meeting this challenge:

  • Relational database
  • NoSQL + message queue
  • NoSQL + cloud-managed message queue
  • NoSQL, no message queue

Let’s look at each implementation in turn.

Implementation 1: Relational Database

Using a relational database was the most straightforward solution — and also the easiest to implement. Here, normalized data is stored in a relational database, and periodic tasks run complex queries to determine which events are part of the same story, then publish the resulting stories so other parts of the system can respond as needed.

Implementation 1: Relational Database

Pros

  • The implementation was relatively simple. The Palo Alto Networks team deployed a database and wrote some queries but didn’t need to implement complex logic for correlating stories.

Cons

  • Since this approach required them to deploy, maintain and operate another database, it would cause considerable operational overhead. Over time, this would add up.
  • Performance was limited since relational database queries are slower than queries on a low-latency NoSQL database like ScyllaDB.
  • They would incur higher operational cost since complex queries require more CPU and are thus more expensive.

Implementation 2: NoSQL + Message Queue

Next, they implemented a solution with ScyllaDB as a NoSQL data store and Kafka as a message queue. Like the first solution, normalized data is stored in a database — but in this implementation, it’s a NoSQL database instead of a relational one. In parallel, they publish to Kafka the keys that will later allow consumers to fetch the event records from the database. Each row in the database represents one event from one of the data sources.

Implementation 2: NoSQL + Message Queue

Multiple consumers read the data from a Kafka topic. Again, this data contains only the key — just enough data to allow those consumers to fetch those records from the database. These consumers then get the actual records from the database, build stories by determining the relations between those events and publish the stories so that other system components can consume them.

Why not store the records and publish the records directly on Kafka? Belenky explained, “The problem is that those records can be big, several megabytes in size. We can’t afford to run this through Kafka due to the performance impact. To meet our performance expectations, Kafka must work from memory, and we don’t have much memory to give it.”
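
This is essentially a claim-check pattern: only small keys travel through Kafka, while the full payloads stay in the database. Below is a minimal sketch of the consumer side, with hypothetical topic, keyspace and table names (kafka-python for the queue; ScyllaDB speaks CQL, so the standard Cassandra driver works):

    from cassandra.cluster import Cluster
    from kafka import KafkaConsumer

    session = Cluster(["scylla-host"]).connect("events_ks")
    consumer = KafkaConsumer("event-keys", bootstrap_servers="kafka:9092")

    for message in consumer:
        event_id = message.value.decode()  # only the key travels through Kafka
        row = session.execute(             # the large record stays in ScyllaDB
            "SELECT payload FROM events WHERE event_id = %s", (event_id,)
        ).one()
        # ...correlate the record into a story and publish it downstream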

Pros

  • Very high throughput compared to the relational database with batch queries.
  • One less database to maintain (ScyllaDB was already used across Palo Alto Networks).

Cons

  • Required implementation of complex logic to identify correlations and build stories.
  • Complex architecture and deployment with data being sent to Kafka and the database in parallel.
  • Providing an isolated deployment for each client meant maintaining thousands of Kafka deployments. Even the smallest customer required two or three Kafka instances.

Implementation 3: NoSQL + Cloud-Managed Message Queue

This implementation is largely the same as the previous one. The only exception is that they replaced Kafka with a cloud-managed queue.

Implementation 3: NoSQL + Cloud-Managed Message Queue

Pros

  • Very high throughput compared to the relational database with batch queries.
  • One less database to maintain (ScyllaDB was already used across Palo Alto Networks).
  • No need to maintain Kafka deployments.

Cons

  • Required implementation of complex logic to identify correlations and build stories.
  • Much slower performance when compared to Kafka.

They quickly dismissed this approach because it was essentially the worst of both worlds: slow performance as well as high complexity.

Implementation 4: NoSQL (ScyllaDB), No Message Queue

Ultimately, the solution that worked best for them was ScyllaDB NoSQL without a message queue.

Implementation 4: NoSQL, No Message Queue

Like all the previous solutions, it starts with normalized data in canonical form, ready for processing; that data is split into hundreds of shards. However, now the records are sent to just one place: ScyllaDB. The partition key is the shard number, allowing different workers to work on different shards in parallel. insert_time is a timestamp with a certain resolution — say, up to 1 second. The clustering key is the event id, which is used later to fetch specific events.
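
As a concrete illustration of that layout, here is a minimal CQL sketch via the Python driver; the keyspace, table and column names are hypothetical, not taken from the talk:

    from cassandra.cluster import Cluster

    # ScyllaDB is CQL-compatible, so the standard Cassandra driver works.
    session = Cluster(["scylla-host"]).connect()
    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS events_ks WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 1}"
    )
    session.set_keyspace("events_ks")

    # shard_number partitions the data so workers can process shards in parallel;
    # insert_time (bucketed to ~1 second) and event_id order events within a shard.
    session.execute("""
        CREATE TABLE IF NOT EXISTS events (
            shard_number int,
            insert_time timestamp,
            event_id uuid,
            payload blob,
            PRIMARY KEY ((shard_number), insert_time, event_id)
        )
    """)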

Belenky expanded, “We have our multiple consumers fetching records from ScyllaDB. They run a query that tells ScyllaDB, ‘Give me all the data that you have for this partition, for this shard, and with the given timestamp.’ ScyllaDB returns all the records to them, they compute the stories, and then they publish the stories for other parts or other components in the system to consume.”

Pros

  • Since ScyllaDB was already deployed across their organization, they didn’t need to add any new technologies to their ecosystem.
  • High throughput when compared to the relational database approach.
  • Comparable performance to the Kafka solution.
  • No need to add or maintain Kafka deployments.

Cons

  • Their code became more complex.
  • Producers and consumers must have synchronized clocks (up to a certain resolution).

Finally, let’s take an even deeper dive into how this solution works. The right side of this diagram shows Palo Alto Networks’ internal “worker” components that build the stories. When the worker components start, they query ScyllaDB. There’s a special table, called read_offsets, where each worker component stores its last offset (the last timestamp that it reached with its reading). ScyllaDB then returns the last state that it had for each shard. For example, for shard 1, the read_offset is 1,000. Shards 2 and 3 have different offsets.

Then the event producers run a query that inserts data, including the event id as well as the actual payload, into the appropriate shard on ScyllaDB.
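
Continuing the schema sketch above (names remain hypothetical), a producer-side insert might look like this:

    import uuid
    from datetime import datetime, timezone

    NUM_SHARDS = 256                   # assumption; the talk says "hundreds of shards"
    event_id = uuid.uuid4()
    shard = event_id.int % NUM_SHARDS  # placeholder sharding scheme
    payload = b"normalized event in canonical form"

    # Bucket insert_time to the ~1-second resolution described earlier.
    now = datetime.now(timezone.utc).replace(microsecond=0)

    session.execute(
        "INSERT INTO events (shard_number, insert_time, event_id, payload) "
        "VALUES (%s, %s, %s, %s)",
        (shard, now, event_id, payload),
    )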

Next, the workers (which are continuously running in an endless loop) take the data from ScyllaDB, compute stories and make the stories available to consumers.

When each of the workers is done computing a story, it commits the last read_offset to ScyllaDB.

When the next event arrives, it’s added to a ScyllaDB shard and processed by the workers… Then the cycle continues.
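
Putting the pieces together, one worker’s read-compute-commit cycle might look roughly like the following sketch; the read_offsets layout is illustrative, and correlate() and publish() are placeholders for logic the talk doesn’t publish:

    import time
    from datetime import datetime, timedelta, timezone
    from cassandra.cluster import Cluster

    session = Cluster(["scylla-host"]).connect("events_ks")
    session.execute(
        "CREATE TABLE IF NOT EXISTS read_offsets ("
        "shard_number int PRIMARY KEY, last_offset timestamp)"
    )

    def correlate(events):
        # Placeholder for the proprietary story-building logic.
        return [list(events)]

    def publish(stories):
        # Placeholder: hand finished stories to downstream consumers.
        for story in stories:
            print(f"story with {len(story)} events")

    def worker_loop(shard):
        row = session.execute(
            "SELECT last_offset FROM read_offsets WHERE shard_number = %s", (shard,)
        ).one()
        offset = row.last_offset if row else datetime(1970, 1, 1, tzinfo=timezone.utc)

        while True:  # the endless loop described above
            # Read only fully written 1-second buckets; this is why producer
            # and worker clocks must stay synchronized to the bucket resolution.
            upper = datetime.now(timezone.utc).replace(microsecond=0) - timedelta(seconds=1)
            events = session.execute(
                "SELECT event_id, payload FROM events "
                "WHERE shard_number = %s AND insert_time > %s AND insert_time <= %s",
                (shard, offset, upper),
            )
            publish(correlate(events))
            # Commit the offset so a restarted worker resumes where it left off.
            session.execute(
                "UPDATE read_offsets SET last_offset = %s WHERE shard_number = %s",
                (upper, shard),
            )
            offset = upper
            time.sleep(0.5)  # modest poll interval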

Final Results

What were their final results? Belenky summed up, “We’ve been able to reduce the operational cost by a lot, actually. We reduced the operational complexity because we didn’t add another system — we actually removed a system [Kafka] from our deployment. And we’ve been able to increase our performance, which translates to reduced operational costs.”

Fauna Transactional Database Introduces Event Streaming

Fauna, the company behind the Fauna transactional database, recently announced the general availability of event streaming, a push-based stream that sends changes at both the document and collection levels to subscribed clients.

Shashank Golla, senior product marketing manager at Fauna, explains:

Fauna’s event streaming employs an open, push-based streaming method to automatically stream real-time data updates to your clients when there is a change in your database. Unlike polling, in event streaming the subscription from the client side happens once and changes are automatically broadcast to the client whenever the subscribed document or collection is updated.

Source: https://fauna.com/blog/event-streaming#ensure-clients-have-least-privilege-access-with-abac

Fauna supports two types of event streaming: document streaming, where the client subscribes to a document reference, and set streaming, where the client subscribes to a set reference and an event notification is triggered when one or more documents enter or leave the set.

A distributed database, Fauna is an object-relational, globally replicated service that supports an indexed-document data model and distributed ACID transactions. A subscription is a connection to the cloud service, held open by the client through the Fauna driver; document and set streaming are available using the C#, Go, JavaScript, JVM (Java, Scala) and Python drivers. Explaining how to integrate event streaming using a sample React application, Shadid Haque, developer advocate at Fauna, suggests:

Avoid running a query to fetch a document and then establishing a stream. Multiple events may have modified the document prior to stream startup, which can lead to an inaccurate representation of the document data in your application.
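
Following that advice, a document-stream subscription might look roughly like the sketch below. It assumes the stream() helper and callback keywords of the faunadb Python driver (v4+); the secret, collection and document ID are placeholders, and the exact callback names should be checked against your driver version:

    from faunadb import query as q
    from faunadb.client import FaunaClient

    client = FaunaClient(secret="your-fauna-secret")  # placeholder secret
    doc_ref = q.ref(q.collection("Orders"), "1234")   # hypothetical document

    def on_start(event):
        # The stream is established; safe to render initial state from here.
        print("stream started")

    def on_version(event):
        # Fired whenever the subscribed document changes.
        print("document updated:", event)

    def on_error(event):
        print("stream error:", event)
        stream.close()

    stream = client.stream(
        doc_ref, on_start=on_start, on_version=on_version, on_error=on_error
    )
    stream.start()  # blocks, dispatching events to the callbacks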

Event streaming databases have become popular in the last few years, and the major cloud providers offer different managed options to stream data, including DynamoDB Streams and Amazon Kinesis Data Streams, Datastream on Google Cloud, and Azure Event Hubs. CockroachDB and Astra DB support event-driven architectures using Change Data Capture (CDC). Jeremy Daly, GM of serverless cloud at Serverless Inc, comments in his latest newsletter:

If you are a database provider and you’re not drifting into the world of event-driven architecture, you might as well start looking for something else to do.

To ensure that clients have least-privilege access, Golla suggests using attribute-based access control (ABAC), Fauna’s extension of traditional role-based access control:

With ABAC, you can implement least privilege access using streaming and provide real-time changes to only the users who should be receiving the updates.

The following limitations apply to Fauna event streaming: GraphQL subscriptions are currently not supported; a browser can open a maximum of 100 streams; and a document stream reports only events for the fields and values within the document’s data field.

Event streaming is charged according to usage and is available in all Fauna pricing plans. Each streamed event counts as two read operations, which cover the first 4 KB read from storage, plus one read operation per additional 4 KB, per subscriber. One compute operation per subscriber is counted for every second a stream is held open.
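
As a worked example of that pricing arithmetic (the event size, subscriber count and stream duration below are illustrative assumptions, not Fauna figures):

    import math

    EVENT_BYTES = 10 * 1024  # a 10 KB event
    SUBSCRIBERS = 5
    STREAM_SECONDS = 60      # how long the stream stays open

    # Two read ops per event cover the first 4 KB; each further 4 KB adds one.
    extra_chunks = math.ceil(max(EVENT_BYTES - 4096, 0) / 4096)  # -> 2
    reads_per_subscriber = 2 + extra_chunks                      # -> 4
    total_reads = reads_per_subscriber * SUBSCRIBERS             # -> 20

    # One compute op per subscriber for every second the stream is open.
    total_compute = STREAM_SECONDS * SUBSCRIBERS                 # -> 300

    print(total_reads, total_compute)  # 20 read operations, 300 compute operations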

eMarketer Podcast: Where video streaming goes from here, watching major events, and Gen Z’s relationship with TV

Connected TV makes television advertising a whole lot easier. With precision targeting and accurate measurement, brands can drive performance and tap into TV’s impact and prestige. MNTN Performance TV makes it even easier—and more effective—with a self-serve, performance-driven marketing solution.
