
Love it!


The serverless.com framework is built around existing serverless/FaaS platforms and allows you to write vendor-agnostic functions and deploy them interchangeably. It does not provide the resources to run your code; it abstracts away the differing APIs of the various vendors to make them appear as one.

Knative on the other hand provides building blocks to eventually build a serverless platform. It provides the resources necessary to run and scale your application.

In short: you could use the serverless.com framework to deploy applications/functions on Knative, but you still need some layer actually running your workload, like Knative.
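For reference, a minimal serverless.com framework config looks roughly like this (the service name, handler path, and provider choice are illustrative):

```yaml
# serverless.yml -- a minimal, illustrative serverless.com framework config.
# The framework translates this vendor-agnostic description into
# provider-specific resources (e.g. a Lambda function plus an API Gateway
# route) at deploy time; it doesn't run the workload itself.
service: my-service

provider:
  name: aws          # swap the provider without rewriting the function
  runtime: nodejs8.x

functions:
  hello:
    handler: handler.hello   # module.function containing your logic
    events:
      - http:
          path: hello
          method: get
```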


Even better! Knative will even allow you to scale your containers out-of-band if you know that something BIG is coming. See https://github.com/knative/serving/issues/1656 for the discussion.


That doesn't tell me much... is it pinging to keep instances warm?


No. The intention is that the autoscaler will support both floor and ceiling values. So while scale-to-zero is the default, you can make an economic decision to run with scale-to-1, scale-to-10, or whatever makes sense for your case. Pinging won't be necessary to artificially create this behaviour.

This is an example of nesting reactive control (the autoscaler) with predictive control (the min/max values).
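As a sketch of what that could look like: the linked discussion is about expressing the floor and ceiling as annotations on a Knative revision template. The annotation names and API version below are assumptions modeled on that discussion, not a shipped API.

```yaml
# Hypothetical sketch: a scale floor and ceiling as annotations on a
# Knative Service's revision template. The reactive autoscaler then
# operates only within these predictively chosen bounds.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"   # floor: never below 1 (disables scale-to-zero)
        autoscaling.knative.dev/maxScale: "10"  # ceiling: cap at 10 replicas
    spec:
      containers:
        - image: example.com/my-image  # placeholder image reference
```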


Interesting... by floor and ceiling, do you mean something like minimum and maximum thresholds for latency?

Here's my pain point: I built a serverless REST API with token authentication on Lambda. However, if it isn't being used all the time, it goes to sleep, and the next sucker who calls the endpoint is stuck waiting for the serverless instance to wake up.

In some cases, even getting a token from a simple POST request would take an awfully long time. This was a few years ago, and I stopped using serverless after that.

But now I'm interested in serverless again because I've been hearing that the cold-start problem is being reduced.

I wonder if in the future developers will just take core logic from a serverless repository and wire up the components, sort of like how WordPress does it, but without the crazy layers of PHP and bloat.


There are two aspects here: engineering and economic.

Our engineering view is that we want startup to be as fast as possible, which is a surprisingly nuanced problem with lots of moving parts that need to collectively do something smart, even before you get to the startup time of your own code. This will improve a lot as we go, but right now it's early days.

The economic question is about trading off the risk of hitting a slow start vs the cost of maintaining idle instances. It is impossible for Knative's contributors to solve that problem with a black box solution. What we can do is to provide you with some knobs and dials to express your preferences.

Edit: I didn't answer this question --

> interesting...by floor and ceiling is that like the minimum and maximum threshold for latency?

Not for now; these would be bounds on the scale the autoscaler can choose. Latency is an example of an output setpoint that an autoscaler could attempt to control, as opposed to a process input. We intend to make autoscaling somewhat pluggable because different people want to target different signals.
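A sketch of what "pluggable" could mean in practice: selecting the autoscaler implementation and the signal it targets per revision, via annotations. The annotation names below are assumptions modeled on Knative's autoscaling annotations, not a settled API.

```yaml
# Hypothetical per-revision autoscaling configuration: choose which
# autoscaler runs this revision and which signal it controls.
metadata:
  annotations:
    autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"  # which autoscaler implementation
    autoscaling.knative.dev/metric: "concurrency"                 # which signal to track
    autoscaling.knative.dev/target: "50"                          # setpoint for that signal
```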


Some of the attributes of serverless workloads will require changes deeper down in the stack (read: Kubernetes). For instance, constantly spinning containers up and throwing them away at high rates stresses parts of Kubernetes more than it's used to. I fully expect lots of requirements to make their way into Kubernetes itself.


If AWS EKS gives you access to kubectl or similar, it is probably trivial to set up; there shouldn't be any obvious roadblocks. It would be a great contribution to add to the guides!


AWS EKS definitely supports kubectl, but it requires the aws-iam-authenticator (formerly the Heptio authenticator) to integrate with AWS IAM. Docs here: https://docs.aws.amazon.com/eks/latest/userguide/configure-k...
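The wiring looks roughly like the kubeconfig "user" stanza below, which tells kubectl to fetch credentials from aws-iam-authenticator via the exec plugin mechanism (the user and cluster names are placeholders):

```yaml
# Illustrative kubeconfig user entry for an EKS cluster: kubectl invokes
# aws-iam-authenticator to mint a token from your AWS IAM credentials.
users:
  - name: eks-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: aws-iam-authenticator
        args:
          - token
          - -i
          - my-eks-cluster   # placeholder: your EKS cluster name
```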


Joining in: Markus here, from IBM and OpenWhisk. I've been involved with Knative for quite some time now as well.

Happy to answer any questions that might arise as well. Answers might be biased and opinions are my own.

See IBM's statement: https://www.ibm.com/blogs/cloud-computing/2018/07/24/ibm-clo...


Can we see a design document/theory of operations, please? It’s difficult to evaluate this effort without understanding exactly what it is - namely, the constituent components, their respective functions, how they work and relate to one another, etc.


The docs repo gives a high level overview and a deeper dive into each of the components with samples:

https://github.com/knative/docs/blob/master/README.md


A bit off topic, but I think this 'serverless' thing will certainly create a lot of jobs. What a messy architecture. Good for IBM :)


@markusthoemmes, could you kindly comment on how OpenWhisk and Knative interact, or how they live side by side?


This of course needs to be discussed in the broader OpenWhisk community first. But taking dewitt's answer here into perspective (https://news.ycombinator.com/item?id=17604185), and based on what IBM has put out in its statement, a possible interaction could be to layer OpenWhisk's higher-level abstractions and multi-tenancy features over the Knative stack and use the latter as an "execution engine".

In technical terms, OpenWhisk's invoker system could be replaced by Knative, keeping the API/controller bits stable to still support the notion of actions/triggers/rules/sequences/compositions, you name it.

