Keynote - Jyothi Pradan - CEO at KurlOn Enterprises
Keynote - Chris Fregly - Principal Developer Advocate, AI and Machine Learning at AWS
Keynote - Eric Weber - Head of Data Product & Experimentation at Yelp
Keynote - Harrison Tang - CEO at Spokeo
Keynote - Michael Swinson - Chief Data Scientist at Tatari
This talk will share lessons learned and best practices for building scalable and robust data pipelines with serverless technologies. As the saying goes, "there is no one size fits all"; through our own experience we have gained an understanding of when to go the serverless route. Factors such as latency, data volumes, computation windows, and the nature of the analytics being supported determine the right architecture for each business use case.
An introduction to Kubeflow from the perspective of a data engineer, with a dive into how Kubeflow works behind the scenes. Alice traveled to the world of pods and higher-order functions, and all the knowledge she acquired motivated her to become a Scala developer. Part of her work now includes helping out her fellow data scientist colleagues. And there's one task which only she can do with her knowledge of Kubernetes. Will Alice be able to gather her knowledge and deploy ML models with Kubeflow? You will find out in this talk.
In this talk we will build a production-grade data pipeline live, in person, using Test Driven Development (TDD) and Pair Programming.
Investigating and rewriting Apache Spark jobs to avoid hitting Spark limitations and improve data processing performance.
Document-oriented databases such as MongoDB and Cosmos DB let you store data in any shape and structure you want. That freedom from a rigid schema can come at a cost, though: performance, query flexibility, and resource costs may be negatively affected unless you understand the implications of your data model.
Operating machine learning products requires trading off the priorities of a data scientist's training environment against those of a data engineer's production environment. In this session, come learn how to bridge the batch/streaming gap through a series of innovations in data processing.
At a time when UX engineers are focused on providing a clean user experience, the web is suddenly cluttered once again with distracting, annoying pop-ups. This presentation will explain the path our engineering team took to find a way to eliminate pop-ups while maintaining regulatory compliance.
We live in a world of constant attacks on our PCs, mobile devices, and networks. In 2020 alone there were more than 300 million ransomware attacks. Hackers are targeting major aspects of our economy: energy, production, healthcare, and communication. They are also going after small organizations: schools, medical centers, and small municipalities that do not have the in-house expertise to protect against these attacks. We will look at ways to secure the data. Backups are important but do not stop the attacks; the data also needs to be protected from copying, erasing, and encrypting. That is what provides the ultimate protection.
We infuse a novel user interface with the use of AI at every level. Data exploration is enabled with an orchestra of AI technologies, including ASR, MT, NLU, NER, sentiment analysis, and topic identification.
Quantum computing is rapidly gaining in capability. It promises to solve problems which would be impossible for a classical computer, including several machine learning applications. In this presentation I will summarize what quantum computing is and why it is so important. I will conclude with how you can leverage this for your enterprise's benefit.
Entering the new normal for billing, payment, and collections after the COVID-19 moratorium. People lost their jobs and could not pay their bills, and moratoriums put a pause on utility shutoffs. Whom do we turn back on? How do we turn on the spigot without flooding the system?
This talk will focus on the importance of managing data streams as layers, using the City of LA as the basis for examples.
- Unleashing the serverless data pipelines (9:30 am - 10:10 am)
- Alice in the world of machine learning (10:15 am - 10:55 am)
- Sponsored (11:00 am - 11:40 am)
- Lunch Break (11:40 am - 12:40 pm)
- Building Production Data Pipelines by using Test Driven Development and Pair Programming (12:40 pm - 1:20 pm)
- Spark challenges (1:25 pm - 2:05 pm)
- Coffee Break (2:05 pm - 2:20 pm)
- Schema Modeling Patterns and Best Practices for MongoDB (9:30 am - 10:10 am)
- Overcoming data infrastructure limitations with a new paradigm for machine learning (10:15 am - 10:55 am)
- Sponsored (11:00 am - 11:40 am)
- Lunch Break (11:40 am - 12:40 pm)
- Engineering a Consent Sandbox to Eliminate Annoying Pop-Ups and Dark Patterns (12:40 pm - 1:20 pm)
- Is a Ransomware Attack in your Future? How to Prevent an Attack (1:25 pm - 2:05 pm)
- Coffee Break (2:05 pm - 2:20 pm)
- Using AI to redefine the User Experience (9:30 am - 10:10 am)
- Quantum Computing: The next new technology in computing (10:15 am - 10:55 am)
- Sponsored (11:00 am - 11:40 am)
- Lunch Break (11:40 am - 12:40 pm)
- A Step Towards Normalcy Beyond COVID-19 Utilities Moratorium Lift (12:40 pm - 1:20 pm)
- A Layered Approach to Data Management (1:25 pm - 2:05 pm)
- Coffee Break (2:05 pm - 2:20 pm)
Discussing the most common legal pitfalls AI/ML businesses experience and best practices to avoid them. Topics covered will include key issues that arise for businesses in legal areas such as intellectual property, contracts, data privacy, and liability.
I'll be talking about the various ways companies and individuals deploy models to mobile apps, the benefits and disadvantages of deploying offline versus online, and when you should choose each.
The Couchbase Analytics service helps users analyze JSON data in near real time without the need to Extract-Transform-Load the data into a separate system. Real-time analytics integrated within a business operation, processing current and historical data to prescribe actions in response to events, helps your business achieve a competitive advantage.
- Improving funding focus
- Bypassing systems created by the systemic racist social construct of our society to get the real help needed to improve the trajectory of impoverished people's lives
- Why data not being shared affects the lives of our poorest countrymen
- Activism through data
Up to 40% of commuters on weekdays choose their trip mode on the day of travel (UK Ministry of Transport data). ETA is leading trials in Glasgow to see what it takes to encourage people away from private cars toward low-carbon transit, active travel, or even not making the trip at all.
Many organizations, enterprises and governments, have been releasing public or open datasets on a variety of topics such as Covid, environment and social justice. In this talk, I'll describe our efforts in making these datasets more easily accessible to the general public through natural language and conversational interfaces.
To intelligently design and optimally operate infrastructure assets, a combination of big data batch and streaming execution models is essential before useful insights can be generated. This presentation focuses on how these models can be applied throughout the life cycle of rail and power system infrastructure to maximize their value.
Learn how to consistently derive Big Business value from Big Data across diverse environments from small Startups to huge Enterprises. Start and end with GOAL: the Growth Opportunities and Actualization Landscape.
In an increasingly complex and regulated space, asset managers and financial institutions face a large data challenge. Impak, the leading impact rating agency, helps its clients face those challenges by providing them with positive and negative, social and environmental impact data to better identify investment opportunities, manage risks, and report on the impact of their portfolios. We do this by using international standards such as the UN SDGs and the Impact Management Project (IMP).
- The importance of data sharing of low income community providers to University, ACLU and similar impact cultivator position organizations (2:20 pm - 3:00 pm)
- Enjoy The Air (3:05 pm - 3:45 pm)
- Coffee Break (3:45 pm - 4:00 pm)
- Exposing Public Data through Natural-Language & Conversational Analytics (4:00 pm - 4:40 pm)
- Leveraging big data to maximize value from rail and power infrastructure assets (2:20 pm - 3:00 pm)
- Leveraging Big Data to Develop Big Business with GOAL: Growth Opportunities & Actualization Lifecycle (3:05 pm - 3:45 pm)
- Coffee Break (3:45 pm - 4:00 pm)
- Identifying opportunities and managing risks with impact and ESG data (4:00 pm - 4:40 pm)