Reviews of TPU-speed data pipelines

6468 reviews

Subramanian R. · Reviewed about 3 years ago

Fatih K. · Reviewed about 3 years ago

Phil P. · Reviewed about 3 years ago

luis miguel p. · Reviewed about 3 years ago

very good

Manuel C. · Reviewed about 3 years ago

Dimas M. · Reviewed about 3 years ago

Muhammad A. · Reviewed about 3 years ago

Emilien E. · Reviewed about 3 years ago

no actual TPU to use

运铎 张. · Reviewed about 3 years ago

Transcript

00:00 Lak: In order to build hybrid machine learning systems that work well both on-premises and in the cloud, your machine learning framework has to support three things: composability, portability, and scalability.

00:17 So let's take composability first. When people think about machine learning, they think about building a model and training a model: TensorFlow, PyTorch, NumPy, et cetera. But the reality is that 95% of the time 00:33 is spent not on building a model but on all the other stuff. Each machine learning stage (data analysis, training, model validation, monitoring) is an independent system, and everyone has a different way to handle all these boxes. 00:52 So when we say "composability," it's about the ability to compose a bunch of microservices together, and the option to use what makes sense for your problem.

But now that you've built your specific framework, 01:10 you want to move it around. And that's where we get into portability. The stack that you use is likely made up of all these components, and probably lots more. 01:25 And all those microservices I detailed earlier only touch a small number of them. But you do it: you configure every stage in the stack, and it's finally running. What is this good for? 01:39 What happens next?

Think about the machine learning workflow. Remember that you did all of this just so that you could develop the model. We'll call that "experimentation." But once you have the code running, 01:55 what do you need to do? That's right: you need to train the model on the full dataset. You probably can't do that on the small setup on which you did all your initial development. 02:08 So you start up a training cluster, and you have to do it all over again. All the configuration, all the libraries, all the testing: you've got to repeat it for the new environment. 02:23 And then, chances are, you've got to do it once again to move it from on-premises to the cloud. Because remember, we said we want a hybrid environment: a machine learning model that, say, 02:37 you train on the cloud and predict on the edge, or train on the cloud and predict on-premises. The point is that you have to configure the stack over and over again 02:47 for each environment that you need to support.

Maybe at this point you're thinking, [scoffs] "That doesn't matter to me. I never have to change environments. I'll only use one environment." 03:01 Wrong. Portability is essential. And then, of course, you've got to do it again when your inputs change, or when your boss calls you and tells you to train faster by training on more machines. 03:17 You inevitably find that you have to change environments over and over again. Also, your laptop counts as environment number one, and you don't run production services on your laptop. 03:31 So you need portability.

So: composability, portability, and finally scalability. You always hear about Kubernetes being able to scale, and that's true, but scalability in machine learning means many more things: 03:50 accelerators (GPUs, TPUs, et cetera), disks, skill sets (software engineers, researchers, data engineers, data analysts, data scientists, all different skill sets), and teams across the org, because there are teams that are going to be building the experiments, 04:09 teams that are going to be using the experiments, and teams that are going to be monitoring the machine learning models. Accelerators, disks, skill sets, teams, experiments. So that's what we think of 04:21 when we think of machine learning in a hybrid cloud environment: composability, portability, scalability.
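The composability idea in the transcript, with each ML stage (data analysis, training, validation) treated as an independent, swappable component that you chain into a pipeline, can be sketched in plain Python. This is an illustrative sketch only; the stage names and the trivial "model" are made up for the example and do not come from the video or any real framework.

```python
from typing import Callable, Dict, List

# A stage is any function that takes the pipeline context and returns
# an updated context. Each stage is independent and swappable -- the
# "boxes" from the talk. All names here are illustrative.
Stage = Callable[[Dict], Dict]

def analyze(ctx: Dict) -> Dict:
    # Data-analysis stage: record a simple statistic about the input.
    ctx["row_count"] = len(ctx["data"])
    return ctx

def train(ctx: Dict) -> Dict:
    # Training stage: stand-in for real training -- "fit" the mean.
    ctx["model"] = sum(ctx["data"]) / len(ctx["data"])
    return ctx

def validate(ctx: Dict) -> Dict:
    # Validation stage: check that a model was actually produced.
    ctx["valid"] = ctx.get("model") is not None
    return ctx

def compose(stages: List[Stage]) -> Stage:
    """Chain independent stages into a single pipeline function."""
    def pipeline(ctx: Dict) -> Dict:
        for stage in stages:
            ctx = stage(ctx)
        return ctx
    return pipeline

# Compose only the stages that make sense for this problem.
pipeline = compose([analyze, train, validate])
result = pipeline({"data": [1.0, 2.0, 3.0]})
```

Because each stage only depends on the shared context, any box can be replaced (say, swapping the toy `train` for a real trainer) without touching the others, which is the point the talk makes about composing microservices.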

Skandarajan R. · Reviewed about 3 years ago

Ananda L. · Reviewed about 3 years ago

Danila M. · Reviewed about 3 years ago

Sergej M. · Reviewed about 3 years ago

Vincenzo D. · Reviewed about 3 years ago

Stephanie C. · Reviewed about 3 years ago

ŁUKASZ M. · Reviewed about 3 years ago

Sravani K. · Reviewed about 3 years ago

Hugo M. · Reviewed about 3 years ago

Sergio R. · Reviewed about 3 years ago

Sergio R. · Reviewed about 3 years ago

Luca D. · Reviewed about 3 years ago

DANIEL LEONARDO U. · Reviewed about 3 years ago

Johan Stiwer P. · Reviewed about 3 years ago

Alexey K. · Reviewed about 3 years ago

We do not guarantee that published reviews come from consumers who have purchased or used the products. Google does not verify reviews.