<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.2">Jekyll</generator><link href="http://haralduebele.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="http://haralduebele.github.io/" rel="alternate" type="text/html" /><updated>2022-11-22T11:00:29+00:00</updated><id>http://haralduebele.github.io/feed.xml</id><title type="html">Harald Uebele’s Blog</title><subtitle>Harald's Blog</subtitle><author><name>Harald Uebele</name></author><entry><title type="html">Test your Github Pages content locally</title><link href="http://haralduebele.github.io/2021/02/15/Test-your-Github-Pages-content-locally/" rel="alternate" type="text/html" title="Test your Github Pages content locally" /><published>2021-02-15T00:00:00+00:00</published><updated>2021-02-15T00:00:00+00:00</updated><id>http://haralduebele.github.io/2021/02/15/Test%20your%20Github%20Pages%20content%20locally</id><content type="html" xml:base="http://haralduebele.github.io/2021/02/15/Test-your-Github-Pages-content-locally/">&lt;p&gt;I am using Github Pages for this blog and for some &lt;a href=&quot;https://harald-u.github.io/security-and-microservices/&quot; target=&quot;_blank&quot;&gt;workshops&lt;/a&gt;, tutorials, etc. Github Pages uses Jekyll to render the pages, and there are instructions on how to set up Jekyll locally to test your content before publishing. I never managed to get them to work; I am not a Ruby expert, and something was always missing.&lt;/p&gt;

&lt;p&gt;On the other hand, using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git commit &amp;amp;&amp;amp; git push&lt;/code&gt; approach is tedious because Github Pages can take some time before it starts rendering.&lt;/p&gt;

&lt;p&gt;I found the perfect solution, at least for me:&lt;/p&gt;

&lt;p&gt;Hans Kristian Flaatten (Starefossen) has created a Docker image that solves this problem; the instructions are &lt;a href=&quot;https://github.com/Starefossen/docker-github-pages&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; in his Github repo.&lt;/p&gt;

&lt;p&gt;You open a terminal session in the root directory of your local repo and start the Docker image like this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;docker run &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PWD&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;:/usr/src/app &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;4000:4000&quot;&lt;/span&gt; starefossen/github-pages
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This mounts your current directory into the container and starts Jekyll. Your pages are served at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:4000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/render-local.png&quot; alt=&quot;render locally&quot; /&gt;&lt;/p&gt;

&lt;p&gt;What’s really cool: you can keep editing your content and whenever you save it, the rendered pages are regenerated. So when you refresh your browser pointing to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:4000&lt;/code&gt; you immediately see the changes!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rendering your pages locally offers another useful option&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;I added a new directory in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_posts&lt;/code&gt; directory and called it &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unpublished&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/unpublished.png&quot; alt=&quot;unpublished&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I also added &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unpublished&lt;/code&gt; to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.gitignore&lt;/code&gt; file. That means nothing in the unpublished directory is pushed into the Github repository and hence nothing there is published on my official blog, but it is visible when rendered locally using Starefossen’s container image. To finally publish a post, simply move it to the correct directory.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><summary type="html">I am using Github Pages for this blog and for some workshops, tutorials, etc. Github Pages uses Jekyll to render the pages, and there are instructions on how to set up Jekyll locally to test your content before publishing. I never managed to get them to work; I am not a Ruby expert, and something was always missing.</summary></entry><entry><title type="html">Moving my Blog from Wordpress to Github Pages</title><link href="http://haralduebele.github.io/2021/02/10/Moving-my-Blog-from-Wordpress-to-Github-Pages/" rel="alternate" type="text/html" title="Moving my Blog from Wordpress to Github Pages" /><published>2021-02-10T00:00:00+00:00</published><updated>2022-01-05T00:00:00+00:00</updated><id>http://haralduebele.github.io/2021/02/10/Moving%20my%20Blog%20from%20Wordpress%20to%20Github%20Pages</id><content type="html" xml:base="http://haralduebele.github.io/2021/02/10/Moving-my-Blog-from-Wordpress-to-Github-Pages/">&lt;p&gt;While I was still working as a Developer Advocate at IBM, I maintained a blog on Wordpress.com. Now that I have retired, I don’t blog much, so I decided to let the Wordpress subscription expire at the end of this year, 2021. But I didn’t want to trash everything I wrote, so I started to play with Github Pages, Jekyll, and other tools. As you can see, I have now successfully moved my blog to Github Pages.&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2021/02/move-1015582_640.jpg&quot; alt=&quot;Moving&quot; /&gt;
Image by &lt;a href=&quot;https://pixabay.com/de/users/peggy_marco-1553824/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1015582&quot;&gt;Peggy und Marco Lachmann-Anke&lt;/a&gt; on &lt;a href=&quot;https://pixabay.com/de/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1015582&quot;&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have used Github Pages before to write the instructions for workshops but have always used one of the Github built-in themes. But they don’t work well for blogs. There are many other Jekyll-based themes that can be used with Github Pages and work for blogs.&lt;/p&gt;

&lt;h4 id=&quot;1-prepare-a-github-repository&quot;&gt;1. Prepare a Github repository&lt;/h4&gt;

&lt;p&gt;First of all you need a Github public repository named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yourgithubusername.github.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If the first part of the repository name doesn’t exactly match your username, it won’t work, so make sure to get it right.&lt;/p&gt;

&lt;p&gt;The full URL of my repository is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://github.com/haralduebele/haralduebele.github.io&lt;/code&gt; and Github Pages will serve its content on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://haralduebele.github.io&lt;/code&gt;. This is called a user or organisation site.&lt;/p&gt;

&lt;h4 id=&quot;2-select-a-theme-for-github-pages&quot;&gt;2. Select a theme for Github Pages&lt;/h4&gt;

&lt;p&gt;The one I selected is called &lt;a href=&quot;https://github.com/amitmerchant1990/reverie&quot; target=&quot;_blank&quot;&gt;“Reverie”&lt;/a&gt;. I tried it, liked it, modified it and that is what you are looking at right now. The README has great setup instructions.&lt;/p&gt;

&lt;p&gt;You need to modify &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_config.yml&lt;/code&gt;, too, before you can see something meaningful.&lt;/p&gt;

&lt;p&gt;An important and not too obvious change is the permalink:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;permalink: /:year/:month/:day/:title/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This duplicates the URL format for blog posts from Wordpress.com.&lt;/p&gt;

&lt;p&gt;Once you commit and push your changes, it will take a moment and then you can view your new site.&lt;/p&gt;

&lt;h4 id=&quot;3-pack-your-crates&quot;&gt;3. Pack your crates&lt;/h4&gt;

&lt;p&gt;You can export your content on Wordpress.com under ‘Tools’ - ‘Export’.&lt;/p&gt;

&lt;p&gt;I chose to export all content and to export the media library:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/wordpress-export.png&quot; alt=&quot;wp export&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The exported content is a ZIP file containing an XML document. The exported media library is a TAR file that contains the images etc., sorted into folders by year and month.&lt;/p&gt;

&lt;p&gt;What do you do with the huge Wordpress XML? Somebody (Will Boyd, lonekorean)
already thought of that:&lt;/p&gt;

&lt;h4 id=&quot;4-convert-wordpress-xml-to-markdown&quot;&gt;4. Convert Wordpress XML to MarkDown&lt;/h4&gt;

&lt;p&gt;I found a pretty good tool &lt;a href=&quot;https://github.com/lonekorean/wordpress-export-to-markdown&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Using it is pretty straightforward with the instructions in the repository. It requires Node.js 12.14 or later.&lt;/p&gt;

&lt;p&gt;Unpack the Wordpress XML from the ZIP file into the root of the tool’s repository, run the script with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;node index.js&lt;/code&gt;, and answer the questions.&lt;/p&gt;

&lt;p&gt;I had it create folders for years and months. Output looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/wordpress-convert.png&quot; alt=&quot;wp convert&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;index.md&lt;/code&gt; is the actual post. If there is an images folder, it will contain all the images the tool was able to grab or scrape from the XML.&lt;/p&gt;

&lt;h4 id=&quot;5-complete-the-conversion&quot;&gt;5. Complete the conversion&lt;/h4&gt;

&lt;p&gt;“wordpress-export-to-markdown” does a pretty good job, but it does require moving files and some manual touch-up of the blog posts.&lt;/p&gt;

&lt;h5 id=&quot;a-file-names&quot;&gt;a. File names&lt;/h5&gt;

&lt;p&gt;In Jekyll, or Reverie respectively, blog entries go into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_posts&lt;/code&gt; directory. They need to follow a specific naming scheme: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yyyy-mm-dd-name.md&lt;/code&gt;. The conversion tool creates names like this for the folders but not for the actual md files; they are all called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;index.md&lt;/code&gt;. So you need to rename the files before you copy them over to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_posts&lt;/code&gt; directory. I have created year directories under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_posts&lt;/code&gt; to make them a little easier to organize.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/posts-directory.png&quot; alt=&quot;posts directory&quot; /&gt;&lt;/p&gt;
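&lt;p&gt;The renaming can be scripted. This is only a sketch under assumptions: it assumes each converted post sits in a folder named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yyyy-mm-dd-name&lt;/code&gt; containing an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;index.md&lt;/code&gt;, and the function name is made up:&lt;/p&gt;

```shell
# Hypothetical helper: copy every converted index.md into
# _posts/YEAR/FOLDERNAME.md, deriving the year from the folder name.
rename_posts() {
  src=$1    # root directory of the converter output
  find "$src" -name index.md | while read -r file; do
    dir=$(basename "$(dirname "$file")")   # e.g. 2020-06-02-my-post
    year=${dir%%-*}                        # e.g. 2020
    mkdir -p "_posts/$year"
    cp "$file" "_posts/$year/$dir.md"
  done
}
```

&lt;p&gt;Run it as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rename_posts output&lt;/code&gt; and check the result before committing.&lt;/p&gt;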

&lt;h5 id=&quot;b-images&quot;&gt;b. Images&lt;/h5&gt;

&lt;p&gt;The images from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;images&lt;/code&gt; folders go into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;images&lt;/code&gt; folder in your new repo. I created year folders, and month folders under the year folders, to make it manageable. I believe the XML file didn’t contain all images when I exported/converted, but you always have the media export, which should contain all the images.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/images-directory.png&quot; alt=&quot;images directory&quot; /&gt;&lt;/p&gt;

&lt;h5 id=&quot;c-frontmatter&quot;&gt;c. Frontmatter&lt;/h5&gt;

&lt;p&gt;The exported index.md files contain frontmatter pulled from Wordpress:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;title&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Serverless&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;and&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Knative&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Part&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Installing&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Knative&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;on&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;CodeReady&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Containers&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;date&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;2020-06-02&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;tags&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; 
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;knative&quot;&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;kubernetes&quot;&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;serverless&quot;&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But you must add some more. This is what I usually have there, e.g.:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;layout&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;post&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;title&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Serverless&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;and&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Knative&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Part&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Installing&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Knative&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;on&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;CodeReady&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Containers&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;date&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;2020-06-02&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;categories&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;Knative&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;Kubernetes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;Serverless&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;published&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;false&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You need “layout: post”. You can also add “categories”, which show up in the post and let you display all your blog entries sorted by category.&lt;/p&gt;

&lt;p&gt;Change &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;published: false&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;published: true&lt;/code&gt; to make the post visible.&lt;/p&gt;

&lt;h5 id=&quot;d-image-links-and-subtitles&quot;&gt;d. Image links and subtitles&lt;/h5&gt;

&lt;p&gt;Image links should look like this:&lt;/p&gt;

&lt;div class=&quot;language-md highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;![&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;](&lt;/span&gt;&lt;span class=&quot;sx&quot;&gt;/images/yyyy/mm/imagename.ext&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This assumes that you also sort your images into year and month folders.&lt;/p&gt;

&lt;p&gt;On Wordpress I sometimes used subtitles under images. In the converted blog entries, the subtitles are simply text, which doesn’t really look good. I style them with a Kramdown attribute list like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;The text is then centered, smaller, and in italics
{:style=&quot;color:gray;font-style:italic;font-size:90%;text-align:center;&quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p style=&quot;color:gray;font-style:italic;font-size:90%;text-align:center;&quot;&gt;The text is then centered, smaller, and in italics&lt;/p&gt;

&lt;h5 id=&quot;e-open-external-links-in-new-windowstabs&quot;&gt;e. Open external links in new windows/tabs&lt;/h5&gt;

&lt;p&gt;Github Markdown cannot do this on its own, but you can simply append a Kramdown attribute ({:target=”_blank”}) to the link:&lt;/p&gt;

&lt;div class=&quot;language-md highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;Link Text&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;](&lt;/span&gt;&lt;span class=&quot;sx&quot;&gt;https://url&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;{:target=&quot;_blank&quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h5 id=&quot;f-syntax-highlighting&quot;&gt;f. Syntax highlighting&lt;/h5&gt;

&lt;p&gt;The Reverie theme uses Pygments/Dracula to highlight code in preformatted sections. I found this to be helpful, especially with quoted YAML.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;```sh
$ this would be shell commands
```

```yaml
and:
  this:
    - would:
        be: yaml
```
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h5 id=&quot;g-escape-characters&quot;&gt;g. Escape characters&lt;/h5&gt;

&lt;p&gt;Look out for backslashes &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;\&lt;/code&gt; used as escape characters and remove them; they are not needed.&lt;/p&gt;
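&lt;p&gt;A sketch of how this cleanup could be scripted (the function name is made up, and the character class is an assumption; adjust it to the escapes you actually see, and review the diff before committing):&lt;/p&gt;

```shell
# Hypothetical cleanup: drop a backslash that precedes common Markdown
# punctuation, e.g. turning \* back into * (uses a GNU sed in-place edit).
strip_escapes() {
  sed -i 's/\\\([][_*#()!-]\)/\1/g' "$1"
}
```

&lt;p&gt;A blanket removal of every backslash would be riskier; restricting it to Markdown punctuation keeps intentional backslashes, e.g. in code snippets, intact.&lt;/p&gt;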

&lt;h4 id=&quot;6-changes-to-the-theme&quot;&gt;6. Changes to the Theme&lt;/h4&gt;

&lt;p&gt;I made modifications to the theme, e.g. I changed the font family in style.scss to IBM Plex because that is my favorite font.&lt;/p&gt;

&lt;p&gt;I added “read time” to my posts based on this &lt;a href=&quot;https://int3ractive.com/blog/2018/jekyll-read-time-without-plugins/&quot; target=&quot;_blank&quot;&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of the search page that is part of the Reverie theme I created an archive page that lists all my blogs sorted by year. This is based on Rafa Garrido’s answer in this &lt;a href=&quot;https://stackoverflow.com/questions/19086284/jekyll-liquid-templating-how-to-group-blog-posts-by-year&quot; target=&quot;_blank&quot;&gt;Stackoverflow question&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And some more stuff … you can go over the top once you have figured out how Jekyll works.&lt;/p&gt;

&lt;h3 id=&quot;update-comments-section&quot;&gt;Update: Comments section&lt;/h3&gt;

&lt;p&gt;Github Pages uses Jekyll to create a static site. This means you can’t include any server-side logic, which would be needed to add comments.&lt;/p&gt;

&lt;p&gt;I looked at &lt;a href=&quot;https://disqus.com/&quot; target=&quot;_blank&quot;&gt;Disqus&lt;/a&gt;; the Reverie theme I use is enabled for Disqus. But it is an external service, and pages with Disqus added seem to get very heavy and heavily tracked, too.&lt;/p&gt;

&lt;p&gt;I read about the idea of using Github Issues to store the comments. I liked this idea and looked at several examples. Then I found &lt;a href=&quot;https://utteranc.es/&quot; target=&quot;_blank&quot;&gt;utterances&lt;/a&gt;. It is a Github App that you install in your repository; you do a little configuration and add a piece of code to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;post.html&lt;/code&gt;. That’s it. It just works. And it’s Open Source, too. Data is stored as Github Issues; there is no tracking and no ads. So this is what you see below.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><summary type="html">While I was still working as a Developer Advocate at IBM, I maintained a blog on Wordpress.com. Now that I have retired, I don’t blog much, so I decided to let the Wordpress subscription expire at the end of this year, 2021. But I didn’t want to trash everything I wrote, so I started to play with Github Pages, Jekyll, and other tools. As you can see, I have now successfully moved my blog to Github Pages.</summary></entry><entry><title type="html">(Maybe) Learn something new about Minikube</title><link href="http://haralduebele.github.io/2021/02/08/maybe-learn-something-new-about-minikube/" rel="alternate" type="text/html" title="(Maybe) Learn something new about Minikube" /><published>2021-02-08T00:00:00+00:00</published><updated>2021-03-19T00:00:00+00:00</updated><id>http://haralduebele.github.io/2021/02/08/maybe%20learn%20something%20new%20about%20minikube</id><content type="html" xml:base="http://haralduebele.github.io/2021/02/08/maybe-learn-something-new-about-minikube/">&lt;p&gt;I wrote my first blog that involved &lt;a href=&quot;https://minikube.sigs.k8s.io/docs/&quot; target=&quot;_blank&quot;&gt;Minikube&lt;/a&gt; in February 2019. And I still use Minikube a lot.&lt;/p&gt;

&lt;p&gt;Recently I tried to figure out how to run Kubernetes exercises on a “memory challenged” notebook (8 GB RAM). I looked into alternatives, namely K3s (a small-footprint Kubernetes distribution) and K3d, which uses K3s and runs it on top of Docker instead of in a VM. That sounded like a solution to the memory challenge. K3d runs Docker in Docker: a worker node is a Docker container running on your workstation’s Docker instance. The worker node itself runs its own Docker, and on this Docker instance all the Kubernetes deployments are running. This is totally cool on Linux because it eliminates the need for virtualization completely; Docker runs natively on Linux. On Mac and Windows you use the virtualization that is part of Docker Desktop. So you need virtualization, but it is perfectly integrated in your host operating system.
&lt;!--more--&gt;&lt;/p&gt;

&lt;p&gt;When I looked a little closer into the Minikube documentation I realized that Minikube can use Docker, too. So here is the first thing I learned new:&lt;/p&gt;

&lt;h3 id=&quot;1-minikube-docker-driver&quot;&gt;1. Minikube Docker Driver&lt;/h3&gt;

&lt;p&gt;The &lt;a href=&quot;https://minikube.sigs.k8s.io/docs/drivers/docker/&quot; target=&quot;_blank&quot;&gt;Docker driver&lt;/a&gt; became experimental somewhere around Minikube Version 1.8 in early 2020. It is now (Minikube Version 1.17) a preferred driver for Linux, macOS, and Windows.&lt;/p&gt;

&lt;p&gt;If you use Minikube a lot, at some point you may have set configuration options, e.g. for the driver. Check with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;minikube config view
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For the Docker driver it will show:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;- driver: docker
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You may have a setting like:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;- vm-driver: virtualbox
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That is ancient; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vm-driver&lt;/code&gt; as a parameter has been deprecated for quite some time.&lt;/p&gt;

&lt;p&gt;The initial start of a Minikube cluster will take some time because it needs to download the Docker image, but subsequent starts should be a lot faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt; It seems that on Mac and Windows networking is different because of Docker Desktop:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In many workshops I use NodePorts to access deployed applications. I use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$(minikube ip)&lt;/code&gt; to determine the worker node’s IP address and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get svc XXX --output 'jsonpath={.spec.ports[*].nodePort}'&lt;/code&gt; to get the corresponding nodeport.&lt;/li&gt;
  &lt;li&gt;You should use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minikube service XXX&lt;/code&gt; to access the service or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minikube service XXX --url&lt;/code&gt; to get the URL, instead. This seems to use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minikube tunnel&lt;/code&gt; (see below) under the covers to gain access to the service.&lt;/li&gt;
&lt;/ul&gt;
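&lt;p&gt;The two commands from the first bullet can be combined into a small helper; this is just a sketch, and the function name is made up (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;XXX&lt;/code&gt; stands for your service name, as above):&lt;/p&gt;

```shell
# Hypothetical helper combining 'minikube ip' and the jsonpath query above
# to build the NodePort URL of a service.
nodeport_url() {
  svc=$1
  ip=$(minikube ip)
  port=$(kubectl get svc "$svc" --output 'jsonpath={.spec.ports[*].nodePort}')
  printf 'http://%s:%s\n' "$ip" "$port"
}
```

&lt;p&gt;For example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nodeport_url todo&lt;/code&gt; would print something like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://192.168.49.2:30675&lt;/code&gt;. Note that a service with several ports would print several nodeports.&lt;/p&gt;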

&lt;hr /&gt;

&lt;p&gt;I found more interesting features that I didn’t know before:&lt;/p&gt;

&lt;h3 id=&quot;2-minikube-service&quot;&gt;2. Minikube Service&lt;/h3&gt;

&lt;p&gt;The command &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minikube service&lt;/code&gt; makes working with Kubernetes services a lot easier.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Get a list of all services. If the service is of type “NodePort”, display the URL:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; minikube service list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;Result (e.g.):&lt;/p&gt;

    &lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; |----------------------|---------------------------|--------------|---------------------------|
 |      NAMESPACE       |           NAME            | TARGET PORT  |            URL            |
 |----------------------|---------------------------|--------------|---------------------------|
 | default              | kubernetes                | No node port |
 | default              | mysql                     |         3306 | http://192.168.49.2:32423 |
 | default              | todo                      |         3000 | http://192.168.49.2:30675 |
 | kube-system          | kube-dns                  | No node port |
 |----------------------|---------------------------|--------------|---------------------------|
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Restrict the list to one namespace:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; minikube service list &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Open a specific service in your default browser, e.g.&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; minikube service todo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;will open the URL for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;todo&lt;/code&gt; service in your browser.&lt;/p&gt;

    &lt;p&gt;&lt;em&gt;I don’t know what happens if you call your service ‘list’, though :-)&lt;/em&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Get the URL of a specific service. Helpful in scripts:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; minikube service todo &lt;span class=&quot;nt&quot;&gt;--url&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;Result, e.g.:&lt;/p&gt;

    &lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; http://192.168.49.2:30675
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;3-minikube-tunnel&quot;&gt;3. Minikube Tunnel&lt;/h3&gt;

&lt;p&gt;If you use an Ingress on Minikube, for example Istio Ingress Gateway, you will have noticed that the corresponding service never gets an external IP address because that is simply not possible on Minikube.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get svc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; istio-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.106.56.168    &amp;lt;pending&amp;gt;     15021:32561/TCP,80:30169/TCP,443:30629/TCP,31400:30606/TCP,15443:32011/TCP   97m
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now, in a separate terminal session, execute:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;minikube tunnel
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minikube tunnel&lt;/code&gt; creates a network route on the host to the service using the cluster’s IP address as a gateway. The tunnel command exposes the external IP directly to any program running on the host operating system.&lt;/p&gt;

&lt;p&gt;Note: The command requires root rights (sudo) because it creates a network configuration.&lt;/p&gt;

&lt;p&gt;If you check the service now, the result will look similar to:&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.106.56.168    10.106.56.168   15021:32561/TCP,80:30169/TCP,443:30629/TCP,31400:30606/TCP,15443:32011/TCP   4h37m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The IP address 10.106.56.168 will be available on your workstation.&lt;/p&gt;

&lt;p&gt;You can then use services like nip.io or xip.io to create dummy DNS entries, like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;myingress.10.106.56.168.xip.io&lt;/code&gt;.&lt;/p&gt;
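&lt;p&gt;Such a dummy DNS name simply embeds the IP address in the hostname, and nip.io resolves it back to that address. A small sketch (the IP and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;myingress&lt;/code&gt; prefix are examples):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: build a nip.io hostname from the tunnel's external IP.
EXTERNAL_IP="10.106.56.168"
HOST="myingress.${EXTERNAL_IP}.nip.io"
# nip.io resolves any <name>.<ip>.nip.io hostname back to <ip>.
echo "${HOST}"
```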

&lt;h3 id=&quot;4-minikube-logviewer&quot;&gt;4. Minikube Logviewer&lt;/h3&gt;

&lt;p&gt;Central logging is important and helpful. But installing something like an ELK stack may be a bit overwhelming for Minikube. A while ago I found the &lt;a href=&quot;https://github.com/ivans3/minikube-log-viewer&quot; target=&quot;_blank&quot;&gt;Minikube Logviewer&lt;/a&gt; which is quite simple and doesn’t require a lot of resources.&lt;/p&gt;

&lt;p&gt;I have now found out that it is also available as a Minikube addon. Enable it with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;minikube addons &lt;span class=&quot;nb&quot;&gt;enable &lt;/span&gt;logviewer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For some reason you need to restart your Minikube cluster, but after that it works:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2021/02/minikube-logviewer.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I hope you will find this list helpful!&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Kubernetes" /><summary type="html">I wrote my first blog that involved Minikube in February 2019. And I still use Minikube a lot. Recently I tried to figure out how to run Kubernetes exercises on a “memory challenged” notebook (8 GB RAM). I looked into alternatives, namely K3s (a small foorprint Kubernetes distribution) and K3d which uses K3s and runs it on top of Docker and not in a VM. That sounded like a solution to the memory challenge. K3d runs Docker in Docker: a worker node is a Docker container running on your workstation’s Docker instance. The worker node itself runs its own Docker and on this Docker instance all the Kubernetes deployments are running. This is totally cool on Linux since it eliminates the need for virtualization completely since Docker runs native on Linux. On Mac and Windows you use the virtualization that is part of Docker Desktop. So you need virtualization but it is perfectly integrated in your host operating system.</summary></entry><entry><title type="html">Run your Code and Containers Serverless on IBM Cloud Code Engine</title><link href="http://haralduebele.github.io/2020/09/21/run-your-code-and-containers-serverless-on-ibm-cloud-code-engine/" rel="alternate" type="text/html" title="Run your Code and Containers Serverless on IBM Cloud Code Engine" /><published>2020-09-21T00:00:00+00:00</published><updated>2020-09-21T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/09/21/run-your-code-and-containers-serverless-on-ibm-cloud-code-engine</id><content type="html" xml:base="http://haralduebele.github.io/2020/09/21/run-your-code-and-containers-serverless-on-ibm-cloud-code-engine/">&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://cloud.ibm.com/docs/codeengine?topic=codeengine-about&quot; target=&quot;_blank&quot;&gt;IBM Cloud Code Engine&lt;/a&gt; is a fully managed, serverless platform that runs your 
containerized workloads, including web apps, micro-services, event-driven functions, or batch jobs. Code Engine even builds container images for you from your source code. Because these workloads are all hosted within the same Kubernetes infrastructure, all of them can seamlessly work together. The Code Engine experience is designed so that you can focus on writing code and not on the infrastructure that is needed to host it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am a big fan of Kubernetes; it is a very powerful tool to manage containerized applications. But if you only want to run a small application without knowing exactly how much traffic it will generate, then Kubernetes may be too big, too expensive, and too much effort. A serverless platform, for example Knative Serving, would most likely be better suited for this. But Knative still requires Kubernetes, and if you run a Knative instance on your own you probably don’t gain much. This is where something like IBM’s Code Engine comes into play: IBM runs the (multi-tenant) environment, you use a small part of it, and in the end you pay only for what you use. You don’t pay for any idle infrastructure. Code Engine is currently available as a Beta.&lt;/p&gt;

&lt;p&gt;Code Engine offers 3 different options:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Applications&lt;/li&gt;
  &lt;li&gt;Jobs&lt;/li&gt;
  &lt;li&gt;Container Builds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Applications and jobs are organized in “Projects” which are based on Kubernetes namespaces and act as a kind of folder. Apps and jobs within each folder can communicate over a private network with each other.&lt;/p&gt;

&lt;h4 id=&quot;run-your-code-as-an-application&quot;&gt;Run your code as an application&lt;/h4&gt;

&lt;p&gt;This is based on Knative Serving. A container image is deployed, it runs and accepts requests until it is terminated by the operator. An example would be a web application that users interact with or a microservice that receives requests from a user or from other microservices. Since it is based on Knative Serving it allows scale-to-zero; no resources are used and hence no money is spent when nobody uses the service. If it receives a request, it spins up, serves the request, and goes dormant again after a time-out. If you allow for auto scaling, it spins up more instances when a huge number of requests comes in. Knative Serving itself can do all of this, but IBM’s Code Engine offers a nice web-based GUI for it, and some additional features that I describe later.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/09/image-3.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
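&lt;p&gt;Under the covers, Knative Serving controls this scaling behavior with annotations on the service’s revision template. A sketch of what such a Knative Service manifest could look like (the name, image, and scaling bounds are examples, not Code Engine defaults):&lt;/p&gt;

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # scale to zero when idle, cap auto scaling at 10 instances
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: ibmcom/hello
```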

&lt;h4 id=&quot;run-a-job&quot;&gt;Run a job&lt;/h4&gt;

&lt;p&gt;What is the difference between an app and a job? An app runs until it is terminated by an operator, and it can receive requests. A job doesn’t receive requests; it runs to completion, i.e. it runs until the task it has been started for is complete. This is not Knative Serving, but Kubernetes knows &lt;a href=&quot;https://kubernetes.io/docs/concepts/workloads/controllers/job/&quot; target=&quot;_blank&quot;&gt;jobs&lt;/a&gt;, and the linked document contains an example that computes π to 2000 places and prints it out, which is a typical task for a job.&lt;/p&gt;

&lt;p&gt;This is how the job would look in Code Engine:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/09/image.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;There is a Job Configuration, it specifies the container image (perl) and in the Pi example the command (perl) and the 3 arguments to compute π to 2000 places and print it.&lt;/p&gt;

&lt;p&gt;Submitting a “jobrun” creates a pod and in the pod’s log we will find π as:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;3.14159265358979323846264338327950288419716939937...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The Submit Job dialog is interesting:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/09/image-1.png?w=522&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is where a Code Engine job differs from Kubernetes: In this screenshot, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Array indices&lt;/code&gt; of “1-50” means that Code Engine will start 50 jobs numbered 1 through 50 using the same configuration. It doesn’t really make sense to calculate π 50 times. (It should produce the identical result 50 times; if not, something is seriously wrong.) But imagine a scenario like this: You have a huge sample of sensor data (or images, or voice samples, etc.) that you need to process to create an ML model. Instead of starting one huge job to process it all, you could start 50 or 100 or even more smaller jobs that work on subsets of the data in an “&lt;a href=&quot;https://en.wikipedia.org/wiki/Embarrassingly_parallel&quot; target=&quot;_blank&quot;&gt;embarrassingly parallel&lt;/a&gt;” approach. The current limit is a maximum of 1000 job instances at the same time.&lt;/p&gt;

&lt;p&gt;Each of the pods for one of these jobs in an array gets an environment variable JOB_INDEX injected. You could then create an algorithm where each job is able to determine which subset of data to work on based on the index number. If one of the jobs fails, e.g. JOB_INDEX=17, you could restart a single job with just this single Array index instead of rerunning all of them.&lt;/p&gt;
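&lt;p&gt;A sketch of how such an algorithm could look in a shell-based job (the chunk size and record layout are assumptions; JOB_INDEX is the variable Code Engine injects):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: derive this instance's data slice from the injected JOB_INDEX.
JOB_INDEX=${JOB_INDEX:-1}   # injected by Code Engine; default for local testing
CHUNK_SIZE=1000             # records per job instance (assumption)
START=$(( (JOB_INDEX - 1) * CHUNK_SIZE ))
END=$(( JOB_INDEX * CHUNK_SIZE - 1 ))
echo "Job ${JOB_INDEX} processes records ${START}-${END}"
```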

&lt;h4 id=&quot;build-a-container-image&quot;&gt;Build a Container Image&lt;/h4&gt;

&lt;p&gt;Code Engine can build container images for you. There are 2 “build strategies”: Buildpack and Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Buildpack&lt;/strong&gt; (or “Cloud Native Buildpack”) is something you may know from Cloud Foundry or Heroku: the Buildpack inspects your code in a source repository, determines the language environment, and then creates a container image. This is of course limited to the supported languages and language environments, and it is based on a number of assumptions. So it will not always work, but if it does, it relieves developers from writing and maintaining Dockerfiles. The Buildpack strategy is based on &lt;a href=&quot;https://paketo.io/&quot; target=&quot;_blank&quot;&gt;Paketo&lt;/a&gt;, which is a Cloud Foundry project. Paketo in turn is based on Cloud Native Buildpacks which are maintained under &lt;a href=&quot;https://buildpacks.io/&quot; target=&quot;_blank&quot;&gt;Buildpacks.io&lt;/a&gt; and are a Cloud Native Computing Foundation (CNCF) sandbox project at the moment. &lt;a href=&quot;https://cloud.ibm.com/docs/codeengine?topic=codeengine-plan-build#build-strategy&quot; target=&quot;_blank&quot;&gt;Buildpacks&lt;/a&gt; are currently available for Go, Java, Node.js, PHP, and .NET Core. More will probably follow.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Dockerfile&lt;/strong&gt; strategy is straightforward: Specify your source repository and the name of the Dockerfile within, then start to create. It is based on &lt;a href=&quot;https://github.com/GoogleContainerTools/kaniko&quot; target=&quot;_blank&quot;&gt;Kaniko&lt;/a&gt; and builds the container image inside a container in the Kubernetes cluster. The Dockerfile strategy should always work, even when using Buildpack fails.&lt;/p&gt;

&lt;p&gt;The container images are stored in an image registry; this can be Docker Hub, the IBM Cloud Container Registry (ICR), or other registries, both public and private. You can safely store the credentials to access private image registries in Code Engine. These secrets can then be used to store images after being built or to retrieve images to deploy a Code Engine app or job.&lt;/p&gt;

&lt;p&gt;Of course, you don’t have to build your container images in Code Engine. You can use your existing DevOps toolchains to create the images and store them in a repository, and Code Engine can pick them up from there. But it’s nice that you can build them in a simple and easy way with Code Engine.&lt;/p&gt;

&lt;h4 id=&quot;code-engine-cli&quot;&gt;Code Engine CLI&lt;/h4&gt;

&lt;p&gt;There is a &lt;a href=&quot;https://cloud.ibm.com/docs/codeengine?topic=codeengine-kn-install-cli&quot; target=&quot;_blank&quot;&gt;Code Engine plugin&lt;/a&gt; for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ibmcloud&lt;/code&gt; CLI. Currently the Code Engine (CE or ce) CLI has more functionality than the web based UI in the IBM Cloud dashboard. This will most likely change when Code Engine progresses during the Beta and when it becomes generally available later.&lt;/p&gt;

&lt;p&gt;You can use the CLI to retrieve the Kubernetes API configuration used by Code Engine. Once this has been done you can also use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn&lt;/code&gt; CLI, though you only have limited permissions in the Kubernetes cluster. I have made a quick test: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl apply -f service.yaml&lt;/code&gt; does work, it creates an app in Code Engine. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn service list&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn service describe hello&lt;/code&gt; also work. So you are not limited to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ibmcloud&lt;/code&gt; CLI.&lt;/p&gt;

&lt;h4 id=&quot;networking&quot;&gt;Networking&lt;/h4&gt;

&lt;p&gt;Code Engine apps are assigned a URL in the form https://hello.abcdefgh-1234.us-south.codeengine.appdomain.cloud. They are accessible externally using HTTPS/TLS secured by a Let’s Encrypt certificate. If you deploy a workload with multiple services/apps, maybe only one of them needs to be accessed from the Internet, e.g. the backend-for-frontend. You can limit the networking of the other services to private Code Engine internal endpoints with the CLI:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;ibmcloud ce application create &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; myapp &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt; ibmcom/hello &lt;span class=&quot;nt&quot;&gt;--cluster-local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This is the same you would do with a label in the YAML file of a Knative service.&lt;/p&gt;
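&lt;p&gt;In Knative terms, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--cluster-local&lt;/code&gt; flag corresponds to a visibility label on the Knative Service, roughly like this (a sketch; the exact label name can differ between Knative versions):&lt;/p&gt;

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
  labels:
    # restrict the service to cluster-internal traffic
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: ibmcom/hello
```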

&lt;p&gt;Code Engine jobs do not need this, they cannot be accessed externally by definition. Jobs can still make external requests, though. And &lt;a href=&quot;https://github.com/IBM/CodeEngine/tree/master/job&quot; target=&quot;_blank&quot;&gt;they can call Code Engine apps internally, there is an example&lt;/a&gt; in the Code Engine sample git repo at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://github.com/IBM/CodeEngine&lt;/code&gt;.&lt;/p&gt;

&lt;h4 id=&quot;integrate-ibm-cloud-services&quot;&gt;Integrate IBM Cloud services&lt;/h4&gt;

&lt;p&gt;If you know Cloud Foundry on the IBM Cloud this should be familiar. IBM Cloud services like Cloud Object Storage, Cloudant database, the Watson services, etc. can be “bound” to a Cloud Foundry app. When the Cloud Foundry app is started, an environment variable VCAP_SERVICES is injected into the pod that holds a JSON object with the configuration (URLs, credentials, etc.) of the bound service/s. The application starting in the pod can then retrieve the configuration and configure access to the service/s. The developers of Code Engine have duplicated this method and in addition to the JSON object in VCAP_SERVICES they also inject individual environment variables for a service (for code that struggles with JSON like Bash scripts).&lt;/p&gt;
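&lt;p&gt;A sketch of how a shell-based job could consume both forms (the individual variable name is an assumption for a bound Cloud Object Storage instance; the placeholder values only make the script runnable outside Code Engine):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: read injected service credentials in a job script.
# Outside Code Engine these variables are unset, so provide placeholders.
if [ -z "${VCAP_SERVICES}" ]; then
  VCAP_SERVICES='{"cloud-object-storage":[]}'   # placeholder JSON
fi
CLOUD_OBJECT_STORAGE_APIKEY=${CLOUD_OBJECT_STORAGE_APIKEY:-placeholder}
echo "Bound services: ${VCAP_SERVICES}"
echo "COS API key set: ${CLOUD_OBJECT_STORAGE_APIKEY:+yes}"
```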

&lt;p&gt;The &lt;a href=&quot;https://cloud.ibm.com/docs/codeengine?topic=codeengine-getting-started#app-hello&quot; target=&quot;_blank&quot;&gt;helloworld&lt;/a&gt; example displays the environment variables of the pod it is running in. If you &lt;a href=&quot;https://cloud.ibm.com/docs/codeengine?topic=codeengine-kn-service-binding&quot; target=&quot;_blank&quot;&gt;bind an IBM Cloud service&lt;/a&gt; to it, you can display the results with it:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/09/image-2.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This binding of IBM Cloud services is really interesting for Code Engine jobs. Remember that you cannot connect to a job and that it can by itself only write to the job log. With this feature, you can bind for example a Cloud Object Storage (COS) service to the job, place your data into a COS bucket, run an array of jobs that pick “their” data based on their JOB_INDEX number, and when done, place the results back into the COS bucket.&lt;/p&gt;

&lt;p&gt;You may have guessed that under the covers, binding an IBM Cloud service to a Code Engine app or job creates a Kubernetes secret automatically.&lt;/p&gt;

&lt;h4 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h4&gt;

&lt;p&gt;Keep in mind that at the time of this writing IBM Cloud Code Engine has just started Beta (it was announced last week). It still has beta limitations, some functions are only available in the CLI, not in the Web UI, and during the Beta, price plans are not available yet. But it is already very promising, it is a very easy start for your small apps using serverless technologies. I am sure that there will be more features and functions in Code Engine as it progresses towards general availability.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Kubernetes" /><category term="Knative" /><category term="Serverless" /><summary type="html">IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, micro-services, event-driven functions, or batch jobs. Code Engine even builds container images for you from your source code. Because these workloads are all hosted within the same Kubernetes infrastructure, all of them can seamlessly work together. 
The Code Engine experience is designed so that you can focus on writing code and not on the infrastructure that is needed to host it.</summary></entry><entry><title type="html">Application Security from a Platform Perspective</title><link href="http://haralduebele.github.io/2020/09/03/application-security-from-a-platform-perspective/" rel="alternate" type="text/html" title="Application Security from a Platform Perspective" /><published>2020-09-03T00:00:00+00:00</published><updated>2020-09-03T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/09/03/application-security-from-a-platform-perspective</id><content type="html" xml:base="http://haralduebele.github.io/2020/09/03/application-security-from-a-platform-perspective/">&lt;p&gt;We have added an application security example to our pet project &lt;a href=&quot;https://github.com/IBM/cloud-native-starter/tree/master/security&quot; target=&quot;_blank&quot;&gt;Cloud Native Starter&lt;/a&gt;.&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/08/diagram.png?w=1024&quot; alt=&quot;Diagram&quot; /&gt;
Picture 1: Application Architecture&lt;/p&gt;

&lt;p&gt;The functionality of our sample is this:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A Web-App service serves a Vue.js/Javascript Web-App frontend application running in the browser of a client&lt;/li&gt;
  &lt;li&gt;This frontend redirects the user to the login page of Keycloak, an open source identity and access management (IAM) system&lt;/li&gt;
  &lt;li&gt;After successful login, the frontend obtains a JSON Web Token (JWT) from Keycloak&lt;/li&gt;
  &lt;li&gt;It requests a list of blog articles from the Web-API using the JWT&lt;/li&gt;
  &lt;li&gt;The Web-API in turn requests the article information from the Articles service, again using the JWT&lt;/li&gt;
  &lt;li&gt;The Web-API and Articles services use Keycloak to verify the validity of the JWT and authorize the requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My colleague Niklas Heidloff has blogged about the language specific application security aspects here:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://heidloff.net/article/security-quarkus-applications-keycloak/&quot; target=&quot;_blank&quot;&gt;Security in Quarkus Applications via Keycloak&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://heidloff.net/article/securing-vue-js-applications-keycloak/&quot; target=&quot;_blank&quot;&gt;Securing Vue.js Applications with Keycloak&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also created an app security workshop from it, the material is publicly available on &lt;a href=&quot;https://ibm-developer.gitbook.io/get-started-with-security-for-your-java-microservi/&quot; target=&quot;_blank&quot;&gt;Gitbook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article I want to talk about application security from the platform side. This is what we cover in the above mentioned workshop:&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/08/istiosecurityarchitecture.png?w=904&quot; alt=&quot;Istio Security Architecture&quot; /&gt;
Picture 2: Platform view of the Cloud Native Starter security sample&lt;/p&gt;

&lt;p&gt;There are two things that I want to write about:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Accessing the application externally using TLS (HTTPS, green arrow)&lt;/li&gt;
  &lt;li&gt;Internal Istio Service Mesh security using mutual TLS (mTLS, red-brown arrows)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4 id=&quot;about-the-architecture&quot;&gt;About the architecture&lt;/h4&gt;

&lt;p&gt;This is a sample setup for a workshop with the main objective to make it as complete as possible while also keeping it as simple as possible. That’s why there are some “short cuts”:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Istio installation is performed with the demo profile.&lt;/li&gt;
  &lt;li&gt;Istio Pod auto-injection is enabled on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace using the required annotation.&lt;/li&gt;
  &lt;li&gt;Web-App deployment in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace is part of the Istio service mesh, although it doesn’t benefit a lot from it; there is no communication with other services in the mesh. But it allows us to use the Istio Ingress for TLS-encrypted HTTPS access. In a production environment I would probably place Web-App outside the mesh, maybe even outside of Kubernetes, since it is only a web server.&lt;/li&gt;
  &lt;li&gt;Keycloak is installed into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace, too. It is an ‘ephemeral’ development install that consists only of a single pod without persistence. By placing it in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace it can be accessed by the Web-App frontend in the browser through the Istio Ingress using TLS/HTTPS which is definitely a requirement for an IAM – you do not want your authentication information flowing unencrypted through the Internet!&lt;br /&gt;
 Making it part of the Service Mesh itself automatically enables encryption in the communication with the Web-API and Articles services; both call Keycloak to verify the validity of the JWT token passed by the frontend.&lt;br /&gt;
 In a production setup, Keycloak would likely be installed in its own namespace. You could either make this namespace part of the Istio service mesh, too. Or you could &lt;a href=&quot;https://istio.io/latest/docs/tasks/traffic-management/egress/&quot; target=&quot;_blank&quot;&gt;configure the Istio Egress&lt;/a&gt; to enable outgoing calls from the Web-API and Articles services to a Keycloak service outside the mesh. Or maybe you even have an existing Keycloak instance running somewhere else. Then you would also use the Istio Egress to get access to it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We are using &lt;a href=&quot;https://www.keycloak.org/&quot; target=&quot;_blank&quot;&gt;Keycloak&lt;/a&gt; in our workshop setup, it is open source and widely used. Actually any OpenID Connect (OIDC) compliant IAM service should work. Another good example would be the &lt;a href=&quot;https://cloud.ibm.com/docs/appid?topic=appid-about&quot; target=&quot;_blank&quot;&gt;App ID service&lt;/a&gt; on IBM Cloud which has the advantage of being a managed service, so you don’t have to manage it yourself.&lt;/p&gt;

&lt;h3 id=&quot;accessing-the-application-with-tls&quot;&gt;Accessing the application with TLS&lt;/h3&gt;

&lt;p&gt;In this example we are using Istio to help secure our application. We will use the Istio Ingress to route external traffic from the Web-App frontend into the application inside the service mesh.&lt;/p&gt;

&lt;p&gt;From a Kubernetes networking view, the Istio Ingress is a Kubernetes service of type LoadBalancer. It requires an external IP address to make it accessible from the Internet. And it will also need a DNS entry in order to be able to create a TLS certificate and to configure the Istio Ingress Gateway correctly.&lt;/p&gt;

&lt;p&gt;How you do that is dependent on your Kubernetes implementation and your Cloud provider. In our example we use the IBM Cloud and the IBM Cloud Kubernetes Service (IKS). For IKS the process of exposing the Istio Ingress with a DNS name and TLS is documented in &lt;a href=&quot;https://cloud.ibm.com/docs/containers?topic=containers-istio-mesh#tls&quot; target=&quot;_blank&quot;&gt;this article&lt;/a&gt; and &lt;a href=&quot;https://cloud.ibm.com/docs/containers?topic=containers-istio-mesh#istio_expose_bookinfo_tls&quot; target=&quot;_blank&quot;&gt;here based on the Istio Bookinfo sample&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The documentation is very good, I won’t repeat it here. But a little background may be required: When you issue the command to create a DNS entry for the load-balancer (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ibmcloud ks nlb-dns create ...&lt;/code&gt;), in the background this command also produces a Let’s Encrypt TLS certificate for this DNS entry and it stores this TLS certificate in a Kubernetes secret in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace. The Istio Ingress is running in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;istio-system&lt;/code&gt; namespace, it cannot access a secret in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt;. That is the reason for the intermediate step to export the secret with the certificate and recreate it in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;istio-system&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So how can storing a TLS certificate in a Kubernetes secret be secure when the secret is only base64 encoded and not encrypted? It isn’t by default, but there are two possible solutions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Use a certificate management system like &lt;a href=&quot;https://cloud.ibm.com/docs/certificate-manager?topic=certificate-manager-about-certificate-manager&quot; target=&quot;_blank&quot;&gt;IBM Certificate Manager&lt;/a&gt;: Certificate Manager uses the Hardware Security Module (HSM)-based &lt;a href=&quot;https://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial&quot; target=&quot;_blank&quot;&gt;IBM Key Protect service&lt;/a&gt; for storing root encryption keys. Those root encryption keys are used to wrap per-tenant data encryption keys, which are in turn used to encrypt per-certificate keys which are then stored securely within Certificate Manager databases.&lt;/li&gt;
  &lt;li&gt;Add a Key Management System (KMS) to the IKS cluster on the IBM Cloud. There is even a free option, &lt;a href=&quot;https://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial&quot; target=&quot;_blank&quot;&gt;IBM Key Protect for IBM Cloud&lt;/a&gt;, or for the very security conscious there is the &lt;a href=&quot;https://cloud.ibm.com/docs/hs-crypto?topic=hs-crypto-get-started&quot; target=&quot;_blank&quot;&gt;IBM Hyper Protect Crypto Service&lt;/a&gt;. Both can be used to encrypt the etcd server of the Kubernetes API server and Kubernetes secrets. You would need to manage the TLS certificates yourself, though.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Or use both, the certificate management system to manage your TLS certificates and the KMS for the rest.&lt;/p&gt;

&lt;p&gt;We didn’t cover adding a certificate management system or a KMS in our workshop to keep it simple. But there is a huge documentation section on many aspects of &lt;a href=&quot;https://cloud.ibm.com/docs/containers?topic=containers-encryption&quot; target=&quot;_blank&quot;&gt;protecting sensitive information in your cluster&lt;/a&gt; on the IBM Cloud:&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/09/cs_encrypt_ov_kms.png&quot; alt=&quot;&quot; /&gt;
Picture 3 (c) IBM Corp.&lt;/p&gt;

&lt;h3 id=&quot;istio-security&quot;&gt;Istio Security&lt;/h3&gt;

&lt;p&gt;In my opinion, Istio is a very important and useful addition to Kubernetes when you work with Microservices architectures. It has features for traffic management, security, and observability. The Istio documentation has a very good section on &lt;a href=&quot;https://istio.io/latest/docs/concepts/security/&quot; target=&quot;_blank&quot;&gt;Istio security features&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In our example we set up Istio with “pod auto-injection” enabled for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace. This means that into every pod that is deployed into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt; namespace, Istio deploys an additional container, the Envoy proxy. Istio then changes the routing information in the pod so that all other containers in the pod communicate with services in other pods only through this proxy. For example, when the Web-API service calls the REST API of the Articles service, the Web-API container in the Web-API pod connects to the Envoy proxy in the Web-API pod which makes the request to the Envoy proxy in the Articles pod which passes the request to the Articles container. Sounds complicated but it happens automagically.&lt;/p&gt;

&lt;p&gt;The Istio control plane contains a certificate authority (CA) that can manage keys and certificates. This Istio CA creates an X.509 certificate for every Envoy proxy, and this certificate can be used for encryption and authentication in the service mesh.&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/09/istio-id-prov.png&quot; alt=&quot;&quot; /&gt;
Picture 4 (c) istio.io&lt;/p&gt;

&lt;p&gt;You can see in Picture 4 that each of our pods runs an Envoy sidecar and each sidecar holds an X.509 certificate, including the Istio Ingress, which is of course part of the service mesh, too.&lt;/p&gt;

&lt;p&gt;With the certificates in place in all the pods, all communication in the service mesh is automatically encrypted using mutual TLS (mTLS). mTLS means that when a client service (e.g. Web-API) calls a server service (e.g. Articles), both sides can verify the authenticity of the other side. With “simple” TLS, only the client can verify the authenticity of the server, not vice versa.&lt;/p&gt;

&lt;p&gt;The Istio CA even performs automatic certificate and key rotation. Imagine what you would need to add to your code to implement this yourself!&lt;/p&gt;
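&lt;p&gt;If you want to verify this yourself: depending on your Istio version, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;istioctl&lt;/code&gt; can show the certificates an Envoy proxy currently holds. This is just a sketch, the pod name is a placeholder:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# inspect the secrets (certificates) loaded into the Envoy sidecar of a pod;
# &amp;lt;web-api-pod&amp;gt; is a placeholder for the actual pod name
$ istioctl proxy-config secret &amp;lt;web-api-pod&amp;gt;.default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;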

&lt;p&gt;You still need to configure the &lt;a href=&quot;https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/&quot; target=&quot;_blank&quot;&gt;Istio Ingress Gateway&lt;/a&gt;. “Gateway” is an Istio configuration resource. This is what its definition looks like:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Gateway&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default-gateway-ingress&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;selector&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;istio&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ingressgateway&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;servers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;443&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;protocol&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;HTTPS&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;tls&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;mode&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;SIMPLE&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;serverCertificate&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/istio/ingressgateway-certs/tls.crt&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;privateKey&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/istio/ingressgateway-certs/tls.key&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This requires that you have followed the instructions linked in the previous section, “Accessing the application with TLS”. These instructions create the DNS hostname specified in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hosts:&lt;/code&gt; variable and place the TLS &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;privateKey&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;serverCertificate&lt;/code&gt; in the correct location.&lt;/p&gt;
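&lt;p&gt;The secret creation from those instructions boils down to something like the following sketch; the secret name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;istio-ingressgateway-certs&lt;/code&gt; and the file names are assumptions derived from the certificate paths in the Gateway definition above:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# create the TLS secret that the Ingress Gateway mounts at
# /etc/istio/ingressgateway-certs (secret and file names are assumptions)
$ kubectl create -n istio-system secret tls istio-ingressgateway-certs \
    --key tls.key --cert tls.crt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;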

&lt;p&gt;Now you can access the Istio Ingress via the DNS hostname, using only (encrypted) HTTPS as the protocol. TLS is terminated at the Istio Ingress, which means the communication is decrypted there; the Ingress holds the required keys. The Istio Ingress is part of the Istio service mesh, so all communication between the Ingress and any other service in the mesh is re-encrypted using mTLS. This happens automatically.&lt;/p&gt;

&lt;p&gt;We also need to define an Istio VirtualService for the Istio Ingress Gateway to configure the internal routes:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;VirtualService&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;virtualservice-ingress&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;hosts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;gateways&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default-gateway-ingress&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;http&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;match&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;uri&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/auth&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;route&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;destination&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;keycloak&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;match&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;uri&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/articles&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;route&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;destination&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8081&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;match&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;uri&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;prefix&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;route&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;destination&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;number&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;80&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Again, the DNS hostname is specified in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hosts:&lt;/code&gt; variable.&lt;/p&gt;

&lt;p&gt;There are three routing rules in this example:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud/auth&lt;/code&gt; will route the request to the Keycloak service, port 8080. If you know Keycloak you will know that 8080 is the unencrypted port!&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud/articles&lt;/code&gt; to the Web-API service, port 8081.&lt;/li&gt;
  &lt;li&gt;Calling &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://harald-uebele-k8s-1234567890-0001.eu-de.containers.appdomain.cloud&lt;/code&gt; without a path sends the request to the Web-App service, which is basically an Nginx webserver listening on port 80. Again: HTTP only!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Is this secure? Yes, because all parties involved communicate within the service mesh via the Envoy proxies, and those encrypt the traffic.&lt;/p&gt;

&lt;p&gt;Can it be more secure? Yes: so far the Istio service mesh is using mTLS in “permissive” mode, which means you can still access the services via unencrypted requests. This is done on purpose, to allow you to migrate into an Istio service mesh without immediately breaking your application. In our example you could still call the Articles service using its NodePort, which effectively bypasses Istio security.&lt;/p&gt;

&lt;h4 id=&quot;switching-to-strict-mtls&quot;&gt;Switching to STRICT mTLS&lt;/h4&gt;

&lt;p&gt;STRICT means that mTLS is &lt;em&gt;enforced&lt;/em&gt; for communication in the Istio service mesh: no unencrypted and no unauthenticated (X.509!) communication is possible. This pretty much eliminates the possibility of man-in-the-middle attacks.&lt;/p&gt;

&lt;p&gt;This requires a PeerAuthentication definition:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;security.istio.io/v1beta1&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;PeerAuthentication&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;default&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;default&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;mtls&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;mode&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;STRICT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The PeerAuthentication policy can be set mesh-wide, for a namespace, or for a workload using a selector. In this example the policy is set for the namespace &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt;.&lt;/p&gt;
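&lt;p&gt;For illustration, a workload-scoped policy would use a selector instead; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;app: articles&lt;/code&gt; label is an assumption about how the Articles pods are labeled:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: &quot;security.istio.io/v1beta1&quot;
kind: &quot;PeerAuthentication&quot;
metadata:
  name: &quot;articles-strict&quot;
  namespace: &quot;default&quot;
spec:
  # enforce mTLS only for pods matching this label (label is an assumption)
  selector:
    matchLabels:
      app: articles
  mtls:
    mode: STRICT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;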

&lt;p&gt;Once this definition is applied, only mTLS-encrypted traffic is possible. You cannot access any service running inside the Istio service mesh by calling it on its NodePort. It also means that services running inside the service mesh cannot call services outside of it without going through an Istio Egress Gateway.&lt;/p&gt;

&lt;p&gt;You can do even more with Istio &lt;em&gt;without changing a line of your code&lt;/em&gt;. The &lt;a href=&quot;https://istio.io/latest/docs/concepts/security/&quot; target=&quot;_blank&quot;&gt;Istio security concepts&lt;/a&gt; and &lt;a href=&quot;https://istio.io/latest/docs/tasks/security/&quot; target=&quot;_blank&quot;&gt;security tasks&lt;/a&gt; give a good overview of what is possible.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Kubernetes" /><category term="Istio" /><category term="Security" /><category term="OpenShift" /><summary type="html">We have added an application security example to our pet project Cloud Native Starter.</summary></entry><entry><title type="html">Knative Example: Deploying a Microservices Application</title><link href="http://haralduebele.github.io/2020/07/02/knative-example-deploying-a-microservices-application/" rel="alternate" type="text/html" title="Knative Example: Deploying a Microservices Application" /><published>2020-07-02T00:00:00+00:00</published><updated>2020-07-02T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/07/02/knative-example-deploying-a-microservices-application</id><content type="html" xml:base="http://haralduebele.github.io/2020/07/02/knative-example-deploying-a-microservices-application/">&lt;p&gt;I have written about Knative Installation, Knative Serving, and Knative Eventing. I have used the simple HelloWorld sample application, which is perfectly fine for learning Knative. But I wanted to apply what I have learned with an example that is closer to reality. If you have followed my blog, you should know our pet project &lt;a href=&quot;https://github.com/IBM/cloud-native-starter&quot;&gt;Cloud Native Starter&lt;/a&gt;. It contains sample code that demonstrates how to get started with cloud-native applications and microservice-based architectures.&lt;/p&gt;

&lt;p&gt;Cloud Native Starter is basically made up of 3 microservices: Web-API, Articles, and Authors. I have used it for an Istio hands-on workshop where one of the objectives is Traffic Management:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/cloudnativestarter-architecture.png?w=701&quot; alt=&quot;Cloud Native Starter&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A browser-based application requests a list of blog articles from the Web-API via the Istio Ingress.&lt;/li&gt;
  &lt;li&gt;The Web-API service retrieves a list of blog articles from the Articles services, and for every article it retrieves author details from the Authors service.&lt;/li&gt;
  &lt;li&gt;There are two versions of the Web-API service.&lt;/li&gt;
  &lt;li&gt;Container images for all services are available on my Docker Hub repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I think this is perfect to exercise my new Knative skills.&lt;/p&gt;

&lt;p&gt;For this example I wanted to give Minikube another try. In my first blog post about the Knative installation I had issues with Minikube and Knative 0.12, which had specific instructions on how to install it on Minikube. I have now tested Minikube v1.11.0 with Knative Serving 0.15 and Kourier as the networking layer, using the &lt;a href=&quot;https://knative.dev/docs/install/any-kubernetes-cluster/&quot;&gt;default Knative 0.15 installation instructions&lt;/a&gt;, and I am happy to report:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knative Serving 0.15 works on Minikube!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is the experience with Cloud Native Starter and Knative:&lt;/p&gt;

&lt;h2 id=&quot;microservice-1-authors&quot;&gt;Microservice 1: Authors&lt;/h2&gt;

&lt;p&gt;The simplest service is Authors; I started by deploying it with a simple Knative YAML file:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/authors:1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;DATABASE&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;local'&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;CLOUDANT_URL&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The only additional configuration consists of the two environment variables DATABASE and CLOUDANT_URL. With these, the service can be configured to use an external Cloudant database to store the author information. With the settings above, author information is stored in memory (‘local’) only.&lt;/p&gt;
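&lt;p&gt;As a sketch, switching to Cloudant would only require different values for these two variables; the exact value expected for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;DATABASE&lt;/code&gt; and the URL format are assumptions, check the Cloud Native Starter code:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;        env:
        # 'cloudant' and the URL format below are assumptions
        - name: DATABASE
          value: 'cloudant'
        - name: CLOUDANT_URL
          value: 'https://user:password@account.cloudantnosqldb.appdomain.cloud'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;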

&lt;p&gt;When you deploy this on Minikube, it creates a Knative service:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME       URL                                         LATEST        AGE     CONDITIONS   READY   REASON
authors    http://authors.default.example.com          authors-v1    12s     3 OK / 3     True    
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It shows that the service listens on the URL:&lt;/p&gt;

&lt;p&gt;http://authors.default.example.com&lt;/p&gt;

&lt;p&gt;This URL cannot be called directly; it is not resolvable via DNS unless you are able to configure your DNS server or use a local hosts file. With a “real” Kubernetes or OpenShift cluster with a real Ingress, e.g. provisioned on the IBM Cloud, these steps would not be necessary. To be able to call the API, we need the IP address of the Minikube “worker” node:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;minikube ip
192.168.39.169
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And here you can find the NodePort of the Kourier ingress:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get svc kourier &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kourier-system
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;                      AGE
kourier   LoadBalancer   10.109.186.248   &amp;lt;pending&amp;gt;     80:30265/TCP,443:31749/TCP   4d1h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The NodePort that serves HTTP is 30265. The Ingress address is therefore 192.168.39.169:30265.&lt;/p&gt;

&lt;p&gt;A REST API call to the Authors service using ‘curl’ is then built like this:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'Host: authors.default.example.com'&lt;/span&gt; http://192.168.39.169:30265/api/v1/getauthor?name&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Harald%20Uebele
&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;name&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;Harald Uebele&quot;&lt;/span&gt;,&lt;span class=&quot;s2&quot;&gt;&quot;twitter&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;@harald_u&quot;&lt;/span&gt;,&lt;span class=&quot;s2&quot;&gt;&quot;blog&quot;&lt;/span&gt;:&lt;span class=&quot;s2&quot;&gt;&quot;https://haralduebele.blog&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In this way the Ingress gets the request with the correct host name in the request header.&lt;/p&gt;

&lt;p&gt;‘authors.default.example.com’ is an external URL, but the Authors service only needs to be called internally; it shouldn’t be exposed to the outside. A Knative service can be configured as ‘&lt;a href=&quot;https://knative.dev/docs/serving/cluster-local-route/&quot;&gt;private cluster-local&lt;/a&gt;’. This is done by labeling either the Knative service or the route:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl label kservice authors serving.knative.dev/visibility&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;cluster-local
service.serving.knative.dev/authors labeled
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Checking the Knative service again:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME       URL                                         LATEST        AGE    CONDITIONS   READY   REASON  
authors    http://authors.default.svc.cluster.local    authors-v1    84m    3 OK / 3     True    
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The URL is now cluster-local. We can also accomplish this by adding a label to the YAML file. This saves one step, but then we are no longer able to test the API in a simple manner with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;microservice-2-articles&quot;&gt;Microservice 2: Articles&lt;/h2&gt;

&lt;p&gt;The Articles Knative service definition is this:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ConfigMap&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;data&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;samplescreation&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;CREATE&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;inmemory&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;USE_IN_MEMORY_STORE&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;serving.knative.dev/visibility&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cluster-local&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/articles:1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;samplescreation&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;valueFrom&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;configMapKeyRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;samplescreation&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inmemory&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;valueFrom&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;configMapKeyRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inmemory&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:8080/&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:8080/health | grep -q articles&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Articles uses a ConfigMap which needs to be created, too.&lt;/p&gt;

&lt;p&gt;In the spec.containers section, environment variables are pulled from the ConfigMap, and liveness and readiness probes are defined. Articles is already tagged as ‘cluster-local’ and will therefore only be callable from within the cluster.&lt;/p&gt;

&lt;p&gt;Deploy and check shows nothing unusual:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME       URL                                         LATEST        AGE    CONDITIONS   READY   REASON
articles   http://articles.default.svc.cluster.local   articles-v1   53s    3 OK / 3     True    
authors    http://authors.default.svc.cluster.local    authors-v1    99m    3 OK / 3     True    
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Since Articles is cluster-internal, it cannot be tested from outside the cluster. You could use another container in the cluster that you can SSH into, e.g. an otherwise empty Fedora container, and call the API from there. I therefore think the best practice during development is to tag the service cluster-local via command, as explained in the Authors service section, instead of using the label in the YAML file. That way you can test the API with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; via the external URL and switch to cluster-local once you are confident that the service works as expected.&lt;/p&gt;
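
Toggling the visibility of a running service can be done by setting or removing the label with `kubectl`. This is a sketch, assuming the service name `articles` from above; the label key is the same one used in the YAML later in this article:

```shell
# Make an already deployed Knative service cluster-local
# by adding the visibility label:
kubectl label kservice articles serving.knative.dev/visibility=cluster-local

# Remove the label again (trailing "-") to expose the service
# externally for testing:
kubectl label kservice articles serving.knative.dev/visibility-
```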

&lt;h2 id=&quot;microservice-3-web-api&quot;&gt;Microservice 3: Web-API&lt;/h2&gt;

&lt;p&gt;This is the service that caused the most trouble although the YAML to deploy it is quite simple:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/web-api:1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;9080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/health | grep -q web-api&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Like Articles, it uses readiness and liveness probes. Both services are based on MicroProfile, and the probes demonstrate the MicroProfile Health feature.&lt;/p&gt;

&lt;p&gt;This service must be reachable from the outside, so no cluster-local tagging is required.&lt;/p&gt;

&lt;p&gt;Deploy it and check for the URL:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME       URL                                         LATEST        AGE    CONDITIONS   READY   REASON
articles   http://articles.default.svc.cluster.local   articles-v1   53s    3 OK / 3     True    
authors    http://authors.default.svc.cluster.local    authors-v1    99m    3 OK / 3     True    
web-api    http://web-api.default.example.com          web-api-v1a   4d1h   3 OK / 3     True    
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Test it with ‘curl’:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'Host: web-api.default.example.com'&lt;/span&gt; http://192.168.39.169:30265/web-api/v1/getmultiple
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Nothing happens: the call seems to hang and eventually returns an empty object. The error log shows:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[err] com.ibm.webapi.business.getArticles: Cannot connect to articles service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What is wrong? Digging into the code reveals that Web-API issues REST requests to the wrong URL, e.g. for Articles:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;static final String BASE_URL = &quot;http://articles:8080/articles/v1/&quot;;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Identical situation for Authors:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;static final String BASE_URL = &quot;http://authors:3000/api/v1/&quot;;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The URLs are correct for plain Kubernetes: both services run in the same namespace and can be called simply by name, and they listen on different ports. For Knative they need to be changed to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://articles.default.svc.cluster.local/articles/v1/&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://authors.default.svc.cluster.local/api/v1/&lt;/code&gt;, both without a port definition, because Knative and its Ingress require fully qualified DNS names and expose HTTP on port 80. I have changed the code, recompiled the two versions of Web-API, and created container images on Docker Hub: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker.io/haraldu/web-api:knative-v1&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker.io/haraldu/web-api:knative-v2&lt;/code&gt; (which we will need later).&lt;/p&gt;

&lt;p&gt;Testing with ‘curl’ still gives no result, but checking the pods shows why:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
articles-v1-deployment-5ddf9869c7-rslv5   0/2     Running   0          22s
web-api-v1-deployment-ff547b857-pc5ms     2/2     Running   0          2m8s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Articles had been scaled to zero and is still in the process of starting (READY: 0/2). It is a traditional Java app and takes some time to start; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;initialDelaySeconds&lt;/code&gt; parameters for the liveness and readiness probes add further delay. Authors had been scaled to zero, too, but as a Node.js app it starts quickly. For Java-based microservices that are supposed to be deployed on Knative, Quarkus is definitely a better choice as it &lt;a href=&quot;http://heidloff.net/article/serverless-quarkus-kubernetes-java-knative/&quot;&gt;reduces startup time&lt;/a&gt; dramatically.&lt;/p&gt;

&lt;h2 id=&quot;disable-scale-to-zero&quot;&gt;Disable Scale-to-Zero&lt;/h2&gt;

&lt;p&gt;This is the modified YAML for Articles; it includes the cluster-local label and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;minScale: &quot;1&quot;&lt;/code&gt; annotation that prevents scale-to-zero:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ConfigMap&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;data&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;samplescreation&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;CREATE&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;inmemory&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;USE_IN_MEMORY_STORE&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;serving.knative.dev/visibility&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cluster-local&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/minScale&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/articles:1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;samplescreation&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;valueFrom&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;configMapKeyRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;samplescreation&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inmemory&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;valueFrom&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;configMapKeyRef&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;articles-config&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;inmemory&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:8080/&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:8080/health | grep -q articles&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And here is the one for Web-API (v1):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/minScale&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/web-api:knative-v1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;9080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/health | grep -q web-api&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;canary-testing&quot;&gt;Canary Testing&lt;/h2&gt;

&lt;p&gt;In the architecture diagram at the very beginning of this article you can see two versions of Web-API. The difference: Version 1 displays a list of 5 articles, Version 2 displays 10. If you deploy a new version of a microservice you will most likely want to test it first, for example as a canary deployment on a subset of users using traffic management.&lt;/p&gt;

&lt;p&gt;This is how you define it:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api-v2&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/minScale&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/haraldu/web-api:knative-v2&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;9080&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;[&quot;sh&quot;, &quot;-c&quot;, &quot;curl -s http://localhost:9080/health | grep -q web-api&quot;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;traffic&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;tag&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;revisionName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;percent&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;75&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;tag&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v2&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;revisionName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;web-api-v2&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;percent&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;25&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the image section, the knative-v2 container image is referenced.&lt;/p&gt;

&lt;p&gt;The traffic section performs a 75/25 split between Version 1 and Version 2. If you know Istio, you will recognize where this function comes from. You will also know how much more configuration Istio requires for traffic management: a VirtualService, a DestinationRule, and entries in the Ingress Gateway configuration.&lt;/p&gt;
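
The same split can also be set imperatively with the kn CLI instead of editing YAML. This is a sketch, assuming the revision names used above:

```shell
# Route 75% of requests to revision web-api-v1 and 25% to web-api-v2
kn service update web-api \
  --traffic web-api-v1=75 \
  --traffic web-api-v2=25
```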

&lt;h2 id=&quot;conclusion-and-further-information&quot;&gt;Conclusion and further information&lt;/h2&gt;

&lt;p&gt;This was the description of an almost “real life” microservices example on Knative. You have seen that the serverless scale-to-zero pattern doesn’t work with typical Java-based microservices and their long start-up times. If you want to use Java together with scale-to-zero, you need to utilize recent developments in Java like Quarkus with its impressively fast start-up.&lt;/p&gt;

&lt;p&gt;So is Knative worth the effort and resources? I am not sure about Knative Eventing, but Knative Serving, with its simpler deployment files and easy implementation of auto-scaling and traffic management, is definitely worth a try. Keep in mind, though, that Knative is not well suited for every workload you would deploy on Kubernetes.&lt;/p&gt;

&lt;p&gt;Additional reading:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Knative documentation, &lt;a href=&quot;https://knative.dev/docs&quot;&gt;https://knative.dev/docs&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Red Hat Knative Tutorial, &lt;a href=&quot;https://redhat-developer-demos.github.io/knative-tutorial&quot;&gt;https://redhat-developer-demos.github.io/knative-tutorial&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Deploying serverless apps with Knative, &lt;a href=&quot;https://cloud.ibm.com/docs/containers?topic=containers-serverless-apps-knative&quot;&gt;https://cloud.ibm.com/docs/containers?topic=containers-serverless-apps-knative&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;</content><author><name>Harald Uebele</name></author><category term="Knative" /><category term="Kubernetes" /><category term="Serverless" /><category term="Microservices" /><summary type="html">I have written about Knative Installation, Knative Serving, and Knative Eventing. I have used the simple HelloWorld sample application which is perfectly fine to learn Knative. But I wanted to apply what I have learned with an example that is closer to reality. If you have followed my blog, you should know our pet project Cloud Native Starter. It contains sample code that demonstrates how to get started with cloud-native applications and microservice based architectures.</summary></entry><entry><title type="html">Serverless and Knative - Part 3: Knative Eventing</title><link href="http://haralduebele.github.io/2020/06/10/serverless-and-knative-part-3-knative-eventing/" rel="alternate" type="text/html" title="Serverless and Knative - Part 3: Knative Eventing" /><published>2020-06-10T00:00:00+00:00</published><updated>2020-06-10T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/06/10/serverless-and-knative-part-3-knative-eventing</id><content type="html" xml:base="http://haralduebele.github.io/2020/06/10/serverless-and-knative-part-3-knative-eventing/">&lt;p&gt;This is part 3 of my blog series about Serverless and Knative. I covered &lt;a href=&quot;https://haralduebele.github.io/2020/06/02/serverless-and-knative-part-1-installing-knative-on-codeready-containers/&quot; target=&quot;_blank&quot;&gt;Installing Knative on CodeReady Containers&lt;/a&gt; in part 1 and &lt;a href=&quot;https://haralduebele.github.io/2020/06/03/serverless-and-knative-part-2-knative-serving/&quot; target=&quot;_blank&quot;&gt;Knative Serving&lt;/a&gt; in part 2.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/m5EQknfW_400x400.jpg&quot; alt=&quot;Knative logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Knative Eventing passes events from an event producer to an event consumer. Knative events follow the &lt;a href=&quot;https://github.com/cloudevents/spec/blob/master/spec.md&quot; target=&quot;_blank&quot;&gt;CloudEvents&lt;/a&gt; specification.&lt;/p&gt;

&lt;p&gt;Event producers can be anything:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Ping” jobs that periodically send an event&lt;/li&gt;
  &lt;li&gt;Apache CouchDB sending an event when a record is written, changed, or deleted&lt;/li&gt;
  &lt;li&gt;Kafka Message Broker&lt;/li&gt;
  &lt;li&gt;Github repository&lt;/li&gt;
  &lt;li&gt;Kubernetes API Server emitting cluster events&lt;/li&gt;
  &lt;li&gt;and &lt;a href=&quot;https://knative.dev/docs/eventing/sources/&quot; target=&quot;_blank&quot;&gt;many more&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An event consumer is typically any type of callable code running on Kubernetes. It can be a “classic” Kubernetes deployment and service, and of course it can be a Knative Service.&lt;/p&gt;

&lt;p&gt;Good sources to learn Knative Eventing are the &lt;a href=&quot;https://knative.dev/docs/eventing/&quot; target=&quot;_blank&quot;&gt;Knative documentation&lt;/a&gt; itself and the &lt;a href=&quot;https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial-eventing/index.html&quot; target=&quot;_blank&quot;&gt;Red Hat Knative Tutorial&lt;/a&gt;; I think the Red Hat tutorial is better structured and more readable.&lt;/p&gt;

&lt;p&gt;There are three usage patterns for Knative Eventing, the first one being the simplest:&lt;/p&gt;

&lt;h3 id=&quot;source-to-sink&quot;&gt;Source to Sink&lt;/h3&gt;

&lt;p&gt;In this case the source sends a message directly to a sink; there is no queuing or filtering. It is a one-to-one relationship.&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/06/source-sink.png&quot; alt=&quot;Source to Sink&quot; /&gt;
(c) Red Hat, Inc.&lt;/p&gt;

&lt;p&gt;Knative Event Sources are Knative objects. The following sources are installed when Knative is installed:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl api-resources &lt;span class=&quot;nt&quot;&gt;--api-group&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'sources.knative.dev'&lt;/span&gt;
NAME               SHORTNAMES   APIGROUP              NAMESPACED   KIND
apiserversources                sources.knative.dev   &lt;span class=&quot;nb&quot;&gt;true         &lt;/span&gt;ApiServerSource
pingsources                     sources.knative.dev   &lt;span class=&quot;nb&quot;&gt;true         &lt;/span&gt;PingSource
sinkbindings                    sources.knative.dev   &lt;span class=&quot;nb&quot;&gt;true         &lt;/span&gt;SinkBinding
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There are many more sources, e.g. a Kafka Source or a CouchDB Source, but they need to be installed separately. To get a basic understanding of Knative eventing, the PingSource is sufficient. It creates something comparable to a cron job on Linux that periodically emits a message.&lt;/p&gt;

&lt;p&gt;The Source links to the Sink so it is best to define/deploy the Sink first. It is a simple Knative Service, the code snippets are all from the Red Hat Knative Tutorial:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventinghello&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventinghello-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;quay.io/rhdevelopers/eventinghello:0.0.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And this is the Source definition:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sources.knative.dev/v1alpha2&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;PingSource&lt;/span&gt; 
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventinghello-ping-source&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; 
  &lt;span class=&quot;na&quot;&gt;schedule&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*/2&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;*&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;jsonData&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;{&quot;key&quot;:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;every&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;mins&quot;}'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;sink&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ref&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventinghello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;PingSource is one of the default Knative Sources.&lt;/li&gt;
  &lt;li&gt;The Schedule is typical cron, it defines that the “ping” happens every 2 minutes.&lt;/li&gt;
  &lt;li&gt;jsonData is the (fixed) message that is transmitted.&lt;/li&gt;
  &lt;li&gt;sink defines the Knative Service that the Source connects to: eventinghello.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When both elements are deployed, we can see that an eventinghello pod is started every two minutes; in its log we can see the message ‘{“key”: “every 2 mins”}’. The pod itself terminates after about 60 to 70 seconds (Knative scale to zero), and another pod is started once the two-minute interval of the PingSource has passed and the next message is sent.&lt;/p&gt;

&lt;p&gt;To recap the Source-to-Sink pattern: it connects an event source with an event sink in a one-to-one relation. In my opinion it is a good starting point to understand Knative Eventing terminology, but it would be an incredible waste of resources if this were the only available pattern. The next pattern is:&lt;/p&gt;

&lt;h3 id=&quot;channel-and-subscription&quot;&gt;Channel and Subscription&lt;/h3&gt;

&lt;p&gt;A Knative Channel is a custom resource that can persist events and can forward them to multiple destinations (via Subscriptions). There are multiple Channel implementations: InMemoryChannel, KafkaChannel, &lt;a href=&quot;https://nats.io/&quot; target=&quot;_blank&quot;&gt;NATS&lt;/a&gt; Channel, etc.&lt;/p&gt;
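&lt;p&gt;As a minimal sketch, a Channel using the default implementation can be created with just a name (the name here is only an example; the API version depends on your Knative release):&lt;/p&gt;

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: Channel
metadata:
  name: my-events-channel
```

&lt;p&gt;Because no implementation is specified, this Channel is backed by whatever the default Channel implementation is, e.g. InMemoryChannel.&lt;/p&gt;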

&lt;p&gt;By default all Knative Channels in a Kubernetes cluster use the InMemoryChannel implementation. The Knative documentation describes InMemoryChannels as “a best effort Channel. &lt;strong&gt;They should NOT be used in Production.&lt;/strong&gt; They are useful for development.” Characteristics are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;No Persistence&lt;/strong&gt;: When a Pod goes down, messages go with it.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;No Ordering Guarantee&lt;/strong&gt;: There is nothing enforcing an ordering, so two messages that arrive at the same time may go to subscribers in any order. Different downstream subscribers may see different orders.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;No Redelivery Attempts&lt;/strong&gt;: When a subscriber rejects a message, there is no attempt to retry sending it.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Dead Letter Sink&lt;/strong&gt;: When a subscriber rejects a message, this message is sent to the dead letter sink, if present, otherwise it is dropped.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are a lot of restrictions, but an InMemoryChannel is much easier to set up than a KafkaChannel, which requires a Kafka server first.&lt;/p&gt;

&lt;p&gt;Knative Eventing is very configurable here: you can change the cluster-wide Channel default, and you can change the Channel implementation per namespace. For example, you can keep InMemoryChannel as the cluster default but use KafkaChannel in one or two projects (namespaces) with much higher requirements for availability and message delivery.&lt;/p&gt;
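&lt;p&gt;As a sketch of how this is configured: Knative Eventing reads the Channel defaults from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default-ch-webhook&lt;/code&gt; ConfigMap in the knative-eventing namespace. The namespace name below is taken from the tutorial; the exact API versions depend on your Knative and Kafka Channel release:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  default-ch-config: |
    # Cluster-wide default: InMemoryChannel
    clusterDefault:
      apiVersion: messaging.knative.dev/v1beta1
      kind: InMemoryChannel
    # Override for a single namespace: KafkaChannel
    namespaceDefaults:
      knativetutorial:
        apiVersion: messaging.knative.dev/v1beta1
        kind: KafkaChannel
```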

&lt;p&gt;A Knative Subscription connects (= subscribes) a Sink service to a Channel. Each Sink service needs its own Subscription to the Channel.&lt;/p&gt;
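&lt;p&gt;A minimal Subscription sketch, assuming a Channel named my-events-channel and using the eventinghello service from above as the sink (API versions again depend on the Knative release):&lt;/p&gt;

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: Subscription
metadata:
  name: eventinghello-subscription
spec:
  # The Channel to subscribe to
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: Channel
    name: my-events-channel
  # The Sink that receives the events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventinghello
```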

&lt;p&gt;Coming from the Source to Sink pattern in the previous section, the Source to Sink relation is now replaced with a Source to Channel relation. One or multiple Sink services subscribe to the Channel:&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/06/channels-subs.png&quot; alt=&quot;Channels and Subscriptions&quot; /&gt;
(c) Red Hat, Inc.&lt;/p&gt;

&lt;p&gt;The Channel and Subscription pattern decouples the event producer (Source) from the event consumer (Sink) and allows for a one to many relation between Source and Sink. Every message / event emitted by the Source is forwarded to one or many Sinks that are subscribed to the Channel.&lt;/p&gt;

&lt;h3 id=&quot;brokers-and-triggers&quot;&gt;Brokers and Triggers&lt;/h3&gt;

&lt;p&gt;The Broker and Trigger pattern extends the Channel and Subscription pattern and is the most interesting scenario, so I will focus on it here. For the plain Channel and Subscription pattern, the Red Hat Knative Tutorial has an &lt;a href=&quot;https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial-eventing/channel-and-subscribers.html&quot; target=&quot;_blank&quot;&gt;example for Channel and Subscriber&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A Broker is a Knative custom resource that is composed of at least two distinct objects, an ingress and a filter. Events are sent to the Broker ingress, the filter strips all metadata from the event data that is not part of the CloudEvent. Brokers typically use Knative Channels to deliver the events.&lt;/p&gt;

&lt;p&gt;This is the definition of a Knative Broker:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventing.knative.dev/v1beta1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Broker&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Configuration specific to this broker.&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ConfigMap&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;config-br-default-channel&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;namespace&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;knative-eventing&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A Trigger is very similar to a Subscription, it subscribes to events from a specific Broker but the most interesting aspect is that it allows filtering on specific events based on their CloudEvent attributes:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventing.knative.dev/v1beta1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Trigger&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-service-trigger&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;broker&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;filter&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;attributes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dev.knative.foo.bar&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;myextension&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-extension-value&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;subscriber&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ref&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I think this is where Knative Eventing gets interesting. Why would you install the overhead of resources called Knative Eventing into your Kubernetes cluster simply to send a message / event from one pod to another? But with an event broker that receives a multitude of different events, and triggers that filter out specific events and route them to specific (micro) services, I can see an advantage.&lt;/p&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/06/brokers-triggers.png&quot; alt=&quot;Brokers and Triggers&quot; /&gt;
(c) Red Hat, Inc.&lt;/p&gt;

&lt;p&gt;This is the slightly modified example from the &lt;a href=&quot;https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial-eventing/eventing-trigger-broker.html&quot; target=&quot;_blank&quot;&gt;Red Hat Knative Tutorial&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Creating a default Broker requires no YAML. To use the default Broker in a Kubernetes namespace, just add a label:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl label namespace knativetutorial knative-eventing-injection&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;enabled
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will automatically create the required resources. To check:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get broker
NAME      READY   REASON   URL                                                       AGE
default   True             http://default-broker.knativetutorial.svc.cluster.local   3d19h

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get channel
NAME                                                        READY   REASON   URL                                                                       AGE
inmemorychannel.messaging.knative.dev/default-kne-trigger   True             http://default-kne-trigger-kn-channel.knativetutorial.svc.cluster.local   3d19h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first command shows the “default” broker is ready and listens to the URL &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://default-broker.knativetutorial.svc.cluster.local&lt;/code&gt;. The second command shows that our default broker uses the InMemoryChannel implementation.&lt;/p&gt;

&lt;p&gt;The example implements two services (sinks) to receive events: eventingaloha and eventingbonjour.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;aloha-sink.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventingaloha&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventingaloha-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;quay.io/rhdevelopers/eventinghello:0.0.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;bonjour-sink.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventingbonjour&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;eventingbonjour-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;quay.io/rhdevelopers/eventinghello:0.0.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;They are exactly the same and based on the same container image; only the name is different. The name will help distinguish which service received an event.&lt;/p&gt;

&lt;p&gt;When everything is set up, we will send three different event types to the broker: ‘aloha’, ‘bonjour’, and ‘greetings’. The ‘aloha’ type should go to the eventingaloha service, ‘bonjour’ to the eventingbonjour service, and ‘greetings’ to both. To accomplish this we need triggers.&lt;/p&gt;

&lt;p&gt;Triggers have some limitations. First, you can filter on multiple attributes, e.g.:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;na&quot;&gt;filter&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;attributes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;dev.knative.foo.bar&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;myextension&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-extension-value&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;But the attributes are always combined with AND: ‘dev.knative.foo.bar’ AND ‘my-extension-value’. We cannot define a trigger that filters on ‘aloha’ OR ‘greetings’; we need two triggers for that.&lt;/p&gt;

&lt;p&gt;Also, a trigger can only define a single subscriber (service). We cannot define one trigger for ‘greetings’ with both the eventingaloha service and the eventingbonjour service as subscribers.&lt;/p&gt;

&lt;p&gt;This means we will need 4 Trigger configurations:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/triggers.png?w=941&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
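&lt;p&gt;As a sketch, the first of these four triggers, alohaaloha, routes events of type ‘aloha’ from the default Broker to the eventingaloha service (the other three follow the same shape with different filter values and subscribers):&lt;/p&gt;

```yaml
apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: alohaaloha
spec:
  broker: default
  # Only events whose CloudEvent type attribute is 'aloha' pass this trigger
  filter:
    attributes:
      type: aloha
  # The single subscriber that receives the filtered events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventingaloha
```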

&lt;p&gt;If you start to seriously work with Knative Triggers, think about a good naming convention for them first. Otherwise troubleshooting can be difficult when the triggers don’t work as expected: the OpenShift Web Console does a very good job of visualizing Knative objects, but it ignores Triggers. And this is all you see on the command line:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl get trigger
NAME               READY   REASON   BROKER    SUBSCRIBER_URI   AGE
alohaaloha         True             default                    21h
bonjourbonjour     True             default                    21h
greetingsaloha     True             default                    21h
greetingsbonjour   True             default                    21h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Our example now looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/broker-trigger-example.png?w=861&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;We have the Knative default Broker, 4 Knative Triggers that filter on specific event attributes and pass the events to one or both of the 2 Knative eventing services. We don’t have an event source yet.&lt;/p&gt;

&lt;p&gt;A little further up we saw that the broker listens to the URL&lt;br /&gt;
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://default-broker.knativetutorial.svc.cluster.local&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We will now simply start a pod in our cluster, based on a Fedora base image that contains the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; command, using this &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curler.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Pod&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;run&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curler&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curler&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curler&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;fedora:29&lt;/span&gt; 
    &lt;span class=&quot;na&quot;&gt;tty&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Start with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; knativetutorial apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; curler.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Get a bash shell in the running pod:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectl &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; knativetutorial &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; curler &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; /bin/bash
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the curler pod, we send an event using curl to the broker URL, event type ‘aloha’:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;root@curler /]# curl &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;http://default-broker.knativetutorial.svc.cluster.local&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-X&lt;/span&gt; POST 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Id: say-hello&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Specversion: 1.0&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Type: aloha&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Source: mycurl&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Content-Type: application/json&quot;&lt;/span&gt; 
&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'{&quot;key&quot;:&quot;from a curl&quot;}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the OpenShift Web Console we can see that an eventingaloha pod has been started:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-6.png?w=794&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;After about a minute this scales down to zero again. The next test is type ‘bonjour’, again in the curler pod:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;root@curler /]# curl &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;http://default-broker.knativetutorial.svc.cluster.local&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-X&lt;/span&gt; POST 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Id: say-hello&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Specversion: 1.0&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Type: bonjour&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Source: mycurl&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Content-Type: application/json&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'{&quot;key&quot;:&quot;from a curl&quot;}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This starts an eventingbonjour pod, as expected:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-7.png?w=804&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If we are fast enough, we can check its logs and see that our event has been forwarded:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;2020-06-09 08:38:22,348 INFO eventing-hello ce-id=say-hello
2020-06-09 08:38:22,349 INFO eventing-hello ce-source=mycurl
2020-06-09 08:38:22,350 INFO eventing-hello ce-specversion=1.0
2020-06-09 08:38:22,351 INFO eventing-hello ce-time=2020-06-09T08:38:12.512544667Z
2020-06-09 08:38:22,351 INFO eventing-hello ce-type=bonjour
2020-06-09 08:38:22,352 INFO eventing-hello content-type=application/json
2020-06-09 08:38:22,355 INFO eventing-hello content-length=21
2020-06-09 08:38:22,356 INFO eventing-hello POST:{&quot;key&quot;:&quot;from a curl&quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the last test, we send an event of type ‘greetings’:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;root@curler /]# curl &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;http://default-broker.knativetutorial.svc.cluster.local&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-X&lt;/span&gt; POST 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Id: say-hello&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Specversion: 1.0&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Type: greetings&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ce-Source: mycurl&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Content-Type: application/json&quot;&lt;/span&gt; 
&lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'{&quot;key&quot;:&quot;from a curl&quot;}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And, as expected, we can see that pods in both services are started:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-8.png?w=818&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
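
&lt;p&gt;The routing in these tests is driven by Knative Trigger objects that filter on the event type. A Trigger for the aloha service looks roughly like this; treat it as a sketch, the apiVersion depends on the installed Knative Eventing version and the name ‘aloha-trigger’ is made up:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Sketch: forwards events of type 'aloha' from the default Broker
# to the eventingaloha Knative Service. The apiVersion and the
# name 'aloha-trigger' are assumptions, not taken from the tutorial.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: aloha-trigger
spec:
  broker: default
  filter:
    attributes:
      type: aloha
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: eventingaloha
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;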

&lt;h3 id=&quot;using-apache-kafka&quot;&gt;Using Apache Kafka&lt;/h3&gt;

&lt;p&gt;I didn’t go through the Knative Kafka example. But since it is hard to find, and since Kafka is the preferable method of setting up a production-scale Broker &amp;amp; Trigger pattern for Knative Eventing, I want to document it here.&lt;/p&gt;

&lt;p&gt;There are actually two parts to the Kafka example:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://knative.dev/v0.14-docs/eventing/samples/kafka/index.html&quot; target=&quot;_blank&quot;&gt;Start with Installing Apache Kafka&lt;/a&gt;: This will probably work in OpenShift (and CRC), too. But depending on the OpenShift version, I would rather install the Strimzi or the Red Hat AMQ Streams operator from the OperatorHub catalog in the OpenShift Web Console and create a Kafka cluster with the help of the installed operator.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Continue with the &lt;a href=&quot;https://knative.dev/v0.14-docs/eventing/samples/kafka/channel/&quot; target=&quot;_blank&quot;&gt;Apache Channel Example&lt;/a&gt;. This example installs a Kafka Channel and uses it together with the Knative Default Broker. In the end, an Event Sink is created, a Trigger that connects the Sink to the Broker, and an Event Source (that uses the Kubernetes API Server to generate events).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
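
&lt;p&gt;To give an idea of what the second part creates: a Kafka-backed channel is a resource of its own. The following is only a sketch based on the v0.14 docs; the apiVersion, the fields, and the name ‘my-kafka-channel’ may differ in your installation:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Sketch of a KafkaChannel resource, based on the Knative v0.14 docs.
# apiVersion and the name 'my-kafka-channel' are assumptions.
apiVersion: messaging.knative.dev/v1alpha1
kind: KafkaChannel
metadata:
  name: my-kafka-channel
spec:
  numPartitions: 3
  replicationFactor: 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;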

&lt;h3 id=&quot;knative-eventing-recap&quot;&gt;Knative Eventing Recap&lt;/h3&gt;

&lt;p&gt;I have had a look now at both Knative Serving and Knative Eventing:&lt;/p&gt;

&lt;p&gt;I really like Knative Serving; I think it can help a developer be more productive.&lt;/p&gt;

&lt;p&gt;I am undecided about Eventing, though. The Broker &amp;amp; Trigger example based on the InMemoryChannel is easy to set up. But using the InMemoryChannel is for testing and learning only, it is not viable for production. And if I set up my cluster with an instance of Apache Kafka I do ask myself why I should take the messaging detour through Eventing and not use Kafka Messaging in my code directly.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Knative" /><category term="Kubernetes" /><category term="Serverless" /><summary type="html">This is part 3 of my blog series about Serverless and Knative. I covered Installing Knative on CodeReady Containers in part 1 and Knative Serving in part 2.</summary></entry><entry><title type="html">Serverless and Knative - Part 2: Knative Serving</title><link href="http://haralduebele.github.io/2020/06/03/serverless-and-knative-part-2-knative-serving/" rel="alternate" type="text/html" title="Serverless and Knative - Part 2: Knative Serving" /><published>2020-06-03T00:00:00+00:00</published><updated>2020-06-03T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/06/03/serverless-and-knative-part-2-knative-serving</id><content type="html" xml:base="http://haralduebele.github.io/2020/06/03/serverless-and-knative-part-2-knative-serving/">&lt;p&gt;In the &lt;a href=&quot;https://haralduebele.github.io/2020/06/02/serverless-and-knative-part-1-installing-knative-on-codeready-containers/&quot; target=&quot;_blank&quot;&gt;first part of this series&lt;/a&gt; I went through the installation of Knative on CodeReady Containers which is basically Red Hat OpenShift 4.4 running on a notebook.&lt;/p&gt;

&lt;p&gt;In this second part I will cover Knative Serving, which is responsible for deploying and running containers, as well as for networking and auto-scaling. Auto-scaling allows scaling to zero, which is probably the main reason why Knative is referred to as a Serverless platform.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/m5EQknfW_400x400.jpg&quot; alt=&quot;Knative logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Before digging into Knative Serving let me share a piece of information from the &lt;a href=&quot;https://github.com/knative/serving/blob/master/docs/runtime-contract.md&quot; target=&quot;_blank&quot;&gt;Knative Runtime Contract&lt;/a&gt; which helps to position Knative. It compares Kubernetes workloads (general-purpose containers) with Knative workloads (stateless request-triggered containers):&lt;/p&gt;

&lt;p&gt;“&lt;em&gt;In contrast to general-purpose containers, stateless request-triggered (i.e. on-demand) autoscaled containers have the following properties:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;em&gt;Little or no long-term runtime state (especially in cases where code might be scaled to zero in the absence of request traffic).&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;Logging and monitoring aggregation (telemetry) is important for understanding and debugging the system, as containers might be created or deleted at any time in response to autoscaling.&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;Multitenancy is highly desirable to allow cost sharing for bursty applications on relatively stable underlying hardware resources.&lt;/em&gt;”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or in other words: Knative considers itself better suited for short-running processes. You need to provide central logging and monitoring because the pods come and go. And multi-tenant hardware can be provisioned large enough to scale for peaks while at the same time making effective use of the resources.&lt;/p&gt;

&lt;p&gt;As a developer, I would expect Knative to make my life easier (Knative claims that it is “abstracting away the complex details and enabling developers to focus on what matters”). Coming from Kubernetes, however, it feels more complicated and confusing at first, because Knative introduces new terminology for its resources. They are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Service&lt;/strong&gt;: Responsible for managing the life cycle of an application/workload. Creates and owns the other Knative objects Route and Configuration.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Route&lt;/strong&gt;: Maps a network endpoint to one or multiple Revisions. Allows Traffic Management.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Configuration&lt;/strong&gt;: Desired state of the workload. Creates and maintains Revisions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Revision&lt;/strong&gt;: A specific version of a code deployment. Revisions are immutable. Revisions can be scaled up and down. Rules can be applied to the Route to direct traffic to specific Revisions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p style=&quot;color:gray;font-style: italic; font-size: 90%; text-align: center;&quot;&gt;&lt;img src=&quot;/images/2020/06/object_model.png&quot; alt=&quot;Kn object model&quot; /&gt;
(c) knative.dev&lt;/p&gt;

&lt;p&gt;Did I already mention that this is confusing? We now need to distinguish between Kubernetes services and Knative services, and on OpenShift also between OpenShift Routes and Knative Routes.&lt;/p&gt;
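
&lt;p&gt;The naming collision also shows up on the command line, where you have to qualify which kind of ‘service’ or ‘route’ you mean. A quick way to see this, sketched here with the short names my Knative version uses (they may vary):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Knative Service (short name: ksvc) vs. Kubernetes Service
oc get ksvc
oc get svc
# Knative Route vs. OpenShift Route: qualify with the API group
oc get routes.serving.knative.dev
oc get routes.route.openshift.io
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;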

&lt;p&gt;Enough complaining, here starts the interesting part:&lt;/p&gt;

&lt;h3 id=&quot;creating-a-sample-application&quot;&gt;Creating a sample application&lt;/h3&gt;

&lt;p&gt;I am following this &lt;a href=&quot;https://knative.dev/v0.12-docs/serving/samples/hello-world/helloworld-nodejs/index.html&quot; target=&quot;_blank&quot;&gt;example&lt;/a&gt; from the Knative web site which is a simple Hello World type of application written in Node.js. The sample is also available in Java, Go, PHP, Python, Ruby, and some other languages.&lt;/p&gt;

&lt;p&gt;Instead of using the Docker build explained in the example I am using an OpenShift Binary build which builds the Container image on OpenShift and stores it as an Image stream in the OpenShift Image Repository. Of course, the Container image could also be on Docker Hub or Quay.io or any other repository that you can access. If you follow the Knative example step by step, you create the Node.js application, a Dockerfile, and some more files. On OpenShift, for the Binary build, we need the application code and the Dockerfile and then create an OpenShift project and the Container image with these commands:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc new-project knativetutorial
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc new-build &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; helloworld &lt;span class=&quot;nt&quot;&gt;--binary&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--strategy&lt;/span&gt; docker
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc start-build helloworld &lt;span class=&quot;nt&quot;&gt;--from-dir&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;deploying-an-app-as-knative-service&quot;&gt;Deploying an app as Knative Service&lt;/h3&gt;

&lt;p&gt;Next I continue with the Knative example. This is the service.yaml file required to deploy the ‘helloworld’ example as a Knative Service:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TARGET&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Node.js&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Sample&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;v1&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you are familiar with Kubernetes, you have to start to pay close attention to the first line, to see that this is the definition of a &lt;em&gt;Knative Service&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;All you need for your deployment are the highlighted lines, specifically the first ‘metadata’.’name’ and the ‘containers’.’image’ specification that tells Kubernetes where to find the Container image.&lt;/p&gt;

&lt;p&gt;Line 11 specifies the location of the Container image just like every other Kubernetes deployment description. In this example, the ‘helloworld’ image is the Image stream in the OpenShift internal Image Repository in a project called ‘knativetutorial’. It is the result of the previous section “Creating a sample application”.&lt;/p&gt;

&lt;p&gt;Lines 12, 13, and 14 are setting an environment variable and are used to “create” different versions. (In the Hello World code, the variable TARGET represents the “World” part.)&lt;/p&gt;

&lt;p&gt;Lines 7 and 8, ‘metadata’ and ‘name’, are optional but highly recommended. They are used to provide arbitrary names for the Revisions. If you omit this second name, Knative will use default names for the Revisions (“helloworld-nodejs-xhz5df”) and if you have more than one version/revision this makes it difficult to distinguish between them.&lt;/p&gt;

&lt;p&gt;With CRC and Knative correctly set up, I simply deploy the service using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;oc&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; service.yaml
service.serving.knative.dev/helloworld-nodejs created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The reply isn’t very spectacular but if you look around (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;oc get all&lt;/code&gt;) you can see that a lot has happened:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A Kubernetes Pod is created, running two containers: user-container and Envoy&lt;/li&gt;
  &lt;li&gt;Multiple Kubernetes services are created; one of them is equipped with an OpenShift route&lt;/li&gt;
  &lt;li&gt;An OpenShift Route is created&lt;/li&gt;
  &lt;li&gt;A Kubernetes deployment and a replica-set are created&lt;/li&gt;
  &lt;li&gt;Knative service, configuration, route, and revision objects are created&lt;/li&gt;
&lt;/ol&gt;
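
&lt;p&gt;The Knative objects from this list can also be queried individually. These commands are a sketch; the short names (like ksvc) may vary with the Knative version installed:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Knative resources created from the single service.yaml
oc get ksvc helloworld-nodejs
oc get configurations
oc get revisions
oc get routes.serving.knative.dev
# plus the underlying Kubernetes objects
oc get deployment,replicaset,pods
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;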

&lt;p&gt;It would have taken a YAML file with a lot more definitions and specifications to accomplish all that with plain Kubernetes. I would say that the Knative claim of “abstracting away the complex details and enabling developers to focus on what matters” is definitely true!&lt;/p&gt;

&lt;p&gt;Take a look at the OpenShift Console, in the Developer, Topology view:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I really like the way the Red Hat OpenShift developers have visualized Knative objects here.&lt;/p&gt;

&lt;p&gt;If you click on the link (Location) of the Route, you will see the helloworld-nodejs response in a browser:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-4.png?w=614&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If you wait about a minute or so, the Pod will terminate: “All Revisions are autoscaled to 0”. If you then click on the Route location (URL), a Pod will be spun up again.&lt;/p&gt;

&lt;p&gt;Another good view of the Knative service is available through the &lt;a href=&quot;https://knative.dev/docs/install/install-kn/&quot; target=&quot;_blank&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn&lt;/code&gt; CLI&lt;/a&gt; tool:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME                URL                                                         LATEST                 AGE   CONDITIONS   READY   REASON
helloworld-nodejs   http://helloworld-nodejs-knativetutorial.apps-crc.testing   helloworld-nodejs-v1   13m   3 OK / 3     True  
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service describe helloworld-nodejs
Name:       helloworld-nodejs
Namespace:  knativetutorial
Age:        15m
URL:        http://helloworld-nodejs-knativetutorial.apps-crc.testing

Revisions:  
  100%  @latest &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;helloworld-nodejs-v1&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;1] &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;15m&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
        Image:  image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;at 53b1b4&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

Conditions:  
  OK TYPE                   AGE REASON
  ++ Ready                  15m 
  ++ ConfigurationsReady    15m 
  ++ RoutesReady            15m 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;adding-a-new-revision&quot;&gt;Adding a new revision&lt;/h3&gt;

&lt;p&gt;I will now create a second version of our app and deploy it as a second Revision using a new file, service-v2.yaml:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TARGET&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Node.js&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Sample&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;v2&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;--&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;UPDATED&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I have changed the revision name suffix to ‘-v2’ and modified the environment variable TARGET so that we can see which “version” is called. Apply with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; service-v2.yaml
service.serving.knative.dev/helloworld-nodejs configured
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Checking with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn&lt;/code&gt; CLI we can see that Revision ‘-v2’ is now used:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service describe helloworld-nodejs
Name:       helloworld-nodejs
Namespace:  knativetutorial
Age:        21m
URL:        http://helloworld-nodejs-knativetutorial.apps-crc.testing

Revisions:  
  100%  @latest &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;helloworld-nodejs-v2&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;2] &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;23s&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
        Image:  image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;at 53b1b4&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

Conditions:  
  OK TYPE                   AGE REASON
  ++ Ready                  18s 
  ++ ConfigurationsReady    18s 
  ++ RoutesReady            18s 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is visible in the OpenShift Web Console, too:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-1.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Revision 2 has now fully replaced Revision 1.&lt;/p&gt;

&lt;h3 id=&quot;traffic-management&quot;&gt;Traffic Management&lt;/h3&gt;

&lt;p&gt;What if we want to Canary test Revision 2? It is just a simple modification in the YAML:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TARGET&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Node.js&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Sample&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;v2&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;--&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;UPDATED&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;traffic&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;tag&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;revisionName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;percent&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;75&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;tag&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v2&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;revisionName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v2&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;percent&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;25&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will create a 75% / 25% distribution between Revisions 1 and 2. Deploy the change and watch it in the OpenShift Web Console:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-2.png?w=1024&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Have you ever used Istio? Accomplishing this with Istio requires configuring the Ingress Gateway plus defining a Destination Rule and a Virtual Service. In Knative, it is just a few additional lines in the Service description. Have you noticed the “Set Traffic Distribution” button in the screenshot of the OpenShift Web Console? There you can modify the distribution on the fly:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-3.png?w=540&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
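
&lt;p&gt;The same distribution can also be set with the kn CLI instead of editing YAML. This is a sketch; check &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn service update --help&lt;/code&gt; for the exact flags of your kn version:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Split traffic 75/25 between the two named revisions
kn service update helloworld-nodejs \
  --traffic helloworld-nodejs-v1=75 \
  --traffic helloworld-nodejs-v2=25
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;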

&lt;h3 id=&quot;auto-scaling&quot;&gt;Auto-Scaling&lt;/h3&gt;

&lt;p&gt;Scale to zero is an interesting feature, but without additional tricks (like pre-started containers or pods, which aren’t available in Knative) it can be annoying: users have to wait until a new pod is started and ready to receive requests. It can also lead to problems like time-outs in a microservices architecture, if a scaled-to-zero service is called by another service and has to be started first.&lt;/p&gt;

&lt;p&gt;On the other hand, if our application / microservice is hit hard with requests, a single pod may not be sufficient to serve them and we may need to scale up. And preferably scale up and down automatically.&lt;/p&gt;

&lt;p&gt;Auto-scaling is accomplished by simply adding a few annotations to the Knative Service description:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;helloworld-nodejs-v3&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;annotations&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# the minimum number of pods to scale down to&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/minScale&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# the maximum number of pods to scale up to&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/maxScale&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;5&quot;&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;# Target in-flight-requests per pod.&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;autoscaling.knative.dev/target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;1&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;image-registry.openshift-image-registry.svc:5000/knativetutorial/helloword:latest&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;TARGET&lt;/span&gt;
              &lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Node.js&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Sample&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;v3&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;--&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Scaling&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;minScale: “1” prevents scale to zero; there will always be at least 1 active pod.&lt;/li&gt;
  &lt;li&gt;maxScale: “5” allows a maximum of 5 pods to be started.&lt;/li&gt;
  &lt;li&gt;target: “1” limits every pod to 1 concurrent request at a time; this is just to make the demo easier to follow.&lt;/li&gt;
&lt;/ul&gt;
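&lt;p&gt;As a rough sketch (not the real autoscaler, which averages concurrency over a sliding window), the desired pod count under these annotations can be derived like this; the in-flight request count of 50 is just an assumed load:&lt;/p&gt;

```shell
# Simplified sketch of the Knative pod autoscaler decision:
# desired = clamp(ceil(in_flight / target), minScale, maxScale)
min_scale=1
max_scale=5
target=1        # in-flight requests each pod should handle
in_flight=50    # assumed load, e.g. from a load test with 50 workers

want=$(( (in_flight + target - 1) / target ))   # ceil(in_flight / target)
if [ "$want" -lt "$min_scale" ]; then want=$min_scale; fi
if [ "$want" -gt "$max_scale" ]; then want=$max_scale; fi
echo "desired pods: $want"
```

&lt;p&gt;With 50 in-flight requests and a target of 1, the autoscaler would like 50 pods but is capped at maxScale, so it settles on 5.&lt;/p&gt;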

&lt;p&gt;All auto-scale parameters are listed and described &lt;a href=&quot;https://knative.dev/docs/serving/configuring-autoscaling/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here I deployed the auto-scale example and ran a load test against it using the &lt;a href=&quot;https://github.com/rakyll/hey&quot;&gt;hey&lt;/a&gt; command:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;hey &lt;span class=&quot;nt&quot;&gt;-z&lt;/span&gt; 30s &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 50 http://helloworld-nodejs-knativetutorial.apps-crc.testing/

Summary:
  Total:	30.0584 secs
  Slowest:	1.0555 secs
  Fastest:	0.0032 secs
  Average:	0.1047 secs
  Requests/sec:	477.1042
  
  Total data:	501935 bytes
  Size/request:	35 bytes

Response &lt;span class=&quot;nb&quot;&gt;time &lt;/span&gt;histogram:
  0.003 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;1]	    |
  0.108 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;9563]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.214 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;3308]	|■■■■■■■■■■■■■■
  0.319 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;899]	|■■■■
  0.424 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;367]	|■■
  0.529 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;128]	|■
  0.635 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;42]	|
  0.740 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;15]	|
  0.845 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;10]	|
  0.950 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;5]	    |
  1.056 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;3]	    |

Latency distribution:
  10% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.0249 secs
  25% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.0450 secs
  50% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.0776 secs
  75% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.1311 secs
  90% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.2157 secs
  95% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.2936 secs
  99% &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;0.4587 secs

Details &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;average, fastest, slowest&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;:
  DNS+dialup:	0.0001 secs, 0.0032 secs, 1.0555 secs
  DNS-lookup:	0.0001 secs, 0.0000 secs, 0.0197 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0079 secs
  resp &lt;span class=&quot;nb&quot;&gt;wait&lt;/span&gt;:	0.1043 secs, 0.0031 secs, 1.0550 secs
  resp &lt;span class=&quot;nb&quot;&gt;read&lt;/span&gt;:	0.0002 secs, 0.0000 secs, 0.3235 secs

Status code distribution:
  &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;200]	14341 responses

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get pod
NAME                                               READY   STATUS    RESTARTS   AGE
helloworld-nodejs-v3-deployment-66d7447b76-4dhql   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-pvxqg   2/2     Running   0          29s
helloworld-nodejs-v3-deployment-66d7447b76-qxkbc   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-vhc69   2/2     Running   0          28s
helloworld-nodejs-v3-deployment-66d7447b76-wphwm   2/2     Running   0          2m35s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
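&lt;p&gt;The Requests/sec figure in the summary is simply the number of responses divided by the total wall time, which is easy to verify:&lt;/p&gt;

```shell
# 14341 responses in 30.0584 seconds, as reported by hey above
awk 'BEGIN { printf "%.1f requests/sec\n", 14341 / 30.0584 }'
```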

&lt;p&gt;At the end of the output we can see that 5 pods have been started, one of them running longer (2m 35s) than the rest. That is the pod pre-started because of minScale: “1”.&lt;/p&gt;

&lt;h3 id=&quot;jakarta-ee-example-from-cloud-native-starter&quot;&gt;Jakarta EE Example from Cloud Native Starter&lt;/h3&gt;

&lt;p&gt;I wanted to see how easy it is to deploy any form of application using Knative Serving.&lt;/p&gt;

&lt;p&gt;I used the authors-java-jee microservice that is part of our &lt;a href=&quot;https://github.com/IBM/cloud-native-starter&quot; target=&quot;_blank&quot;&gt;Cloud Native Starter&lt;/a&gt; project and that we use in an exercise of an OpenShift workshop. A container image of this service is stored on Dockerhub in my colleague Niklas Heidloff’s registry as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nheidloff/authors:v1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is the Knative service.yaml:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee-v1&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/nheidloff/authors:v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When I deployed this, I noticed that it never started (you need to scroll the following output to the right to see the problem):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME          URL                                                   LATEST   AGE   CONDITIONS   READY     REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing            33s   0 OK / 3     Unknown   RevisionMissing : Configuration &lt;span class=&quot;s2&quot;&gt;&quot;authors-jee&quot;&lt;/span&gt; is waiting &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;a Revision to become ready.

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get pod
NAME                                         READY   STATUS    RESTARTS   AGE
authors-jee-v1-deployment-7dd4b989cf-v9sv9   1/2     Running   0          42s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The user-container in the pod never starts and the Revision never becomes ready. Why is that?&lt;/p&gt;

&lt;p&gt;To understand this problem you have to know that there are two versions of the authors service: the first version is written in Node.js and listens on port 3000. The second version is the JEE version we are trying to deploy here. To make it a drop-in replacement for the Node.js version, it is configured to listen on port 3000, too. That is very unusual for JEE, and something Knative obviously does not pick up from the Docker metadata in the image.&lt;/p&gt;

&lt;p&gt;The Knative Runtime Contract has some information about &lt;a href=&quot;https://github.com/knative/serving/blob/master/docs/runtime-contract.md#inbound-network-connectivity&quot; target=&quot;_blank&quot;&gt;Inbound Network Connectivity&lt;/a&gt;, Protocols and Ports:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“The developer MAY specify this port at deployment; if the developer does not specify a port, the platform provider MUST provide a default. Only one inbound &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerPort&lt;/code&gt; SHALL be specified in the core.v1.Container specification. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hostPort&lt;/code&gt; parameter SHOULD NOT be set by the developer or the platform provider, as it can interfere with ingress autoscaling. Regardless of its source, the selected port will be made available in the PORT environment variable.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I found another piece of information regarding containerPort in the &lt;a href=&quot;https://cloud.ibm.com/docs/containers?topic=containers-serverless-apps-knative#knative-container-port&quot; target=&quot;_blank&quot;&gt;IBM Cloud documentation about Knative&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“By default, all incoming requests to your Knative service are sent to port 8080. You can change this setting by using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerPort&lt;/code&gt; specification.”&lt;/em&gt;&lt;/p&gt;
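&lt;p&gt;The alternative to declaring a containerPort would be an application that honors the injected PORT variable. Here is a minimal sketch of that resolution logic (the authors service itself does not do this, it is hard-wired to port 3000):&lt;/p&gt;

```shell
# Resolve the port a Knative container should bind to:
# use the injected PORT variable, fall back to the 8080 default.
resolve_port() {
  echo "${PORT:-8080}"
}

PORT=3000     # simulate what the platform would inject for the authors service
echo "app would listen on $(resolve_port)"
```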

&lt;p&gt;I modified the Knative service YAML to include the ports.containerPort information:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee-v2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/nheidloff/authors:v1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note the revision name ending in ‘-v2’! Checking after deployment:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
NAME          URL                                                   LATEST           AGE   CONDITIONS   READY   REASON
authors-jee   http://authors-jee-knativetutorial.apps-crc.testing   authors-jee-v2   11m   3 OK / 3     True    

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get pod
NAME                                        READY   STATUS    RESTARTS   AGE
authors-jee-v2-deployment-997d44565-mhn7w   2/2     Running   0          51s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The authors-java-jee microservice uses Eclipse MicroProfile and implements specific health checks. These can be used as Kubernetes &lt;strong&gt;readiness and liveness probes&lt;/strong&gt;. The YAML file then looks like this; the syntax is exactly the standard Kubernetes syntax:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee-v2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/nheidloff/authors:v1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;3000&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;livenessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;curl&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-s&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;http://localhost:3000/&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;20&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;readinessProbe&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;exec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sh&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-c&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;curl&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-s&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;http://localhost:3000/health&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;grep&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-q&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;authors&quot;&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;initialDelaySeconds&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;40&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
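&lt;p&gt;To illustrate what the readinessProbe above checks: the probe passes only if the /health response mentions the authors check. The JSON payload below is a made-up stand-in for the real MicroProfile Health output:&lt;/p&gt;

```shell
# Stand-in for what 'curl -s http://localhost:3000/health' might return;
# the actual MicroProfile Health JSON will differ in detail.
health='{"status":"UP","checks":[{"name":"authors","status":"UP"}]}'

# Same check the readinessProbe runs; grep -q sets the exit code Kubernetes sees
if printf '%s' "$health" | grep -q authors; then
  echo "probe: ready"
else
  echo "probe: not ready"
fi
```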

&lt;h3 id=&quot;microservices-architectures-and-knative-private-services&quot;&gt;Microservices Architectures and Knative private services&lt;/h3&gt;

&lt;p&gt;So far, the examples I tested were all exposed on public URLs using the Kourier Ingress Gateway. This is useful for testing and also for externally accessible microservices, e.g. backend-for-frontend services that serve a browser-based web front end or a REST API for external applications. The majority of microservices in a cloud-native application, however, should only be called cluster-locally and not be exposed via an external URL.&lt;/p&gt;

&lt;p&gt;The Knative documentation has information on how to &lt;a href=&quot;https://knative.dev/v0.12-docs/serving/cluster-local-route/&quot; target=&quot;_blank&quot;&gt;label a service cluster-local&lt;/a&gt;. You can add the label either to the Knative service or to the Knative route. The steps described in the documentation are to first deploy the service and then convert it to cluster-local via the label.&lt;/p&gt;

&lt;p&gt;You can easily add the label to the YAML file and immediately deploy a cluster-local Knative service. This is the modified Jakarta EE example of the previous section:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;serving.knative.dev/v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Service&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;serving.knative.dev/visibility&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cluster-local&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;authors-jee-v2&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/nheidloff/authors:v1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;containerPort&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When this is deployed to OpenShift, the correct URL shows up in the Route:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/image-5.png?w=493&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Of course, you can no longer open the URL in your browser; this address is only reachable from within the Kubernetes cluster.&lt;/p&gt;

&lt;h3 id=&quot;debugging-tips&quot;&gt;Debugging Tips&lt;/h3&gt;

&lt;p&gt;There are new places to look for information as to why a Knative service doesn’t work. Here is a list of helpful commands and examples:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Display the Knative service:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kn service list
 NAME          URL                                                   LATEST   AGE    CONDITIONS   READY   REASON
 authors-jee   http://authors-jee-knativetutorial.apps-crc.testing            3m7s   0 OK / 3     False   RevisionMissing : Configuration &lt;span class=&quot;s2&quot;&gt;&quot;authors-jee&quot;&lt;/span&gt; does not have any ready Revision.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;It is normal and to be expected that the revision is not available for some time immediately after the deployment because the application container needs to start first. But in this example the revision isn’t available after more than 3 minutes, and that is not normal.&lt;/p&gt;

    &lt;p&gt;You can also display Knative service info using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;oc&lt;/code&gt; instead of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kn&lt;/code&gt; by using ‘kservice’:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get kservice
 NAME          URL                                                   LATESTCREATED    LATESTREADY   READY   REASON
 authors-jee   http://authors-jee-knativetutorial.apps-crc.testing   authors-jee-v2                 False   RevisionMissing
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Check the pod:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get pod
 No resources found &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;knativetutorial namespace.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;That is bad: no pod means no logs to look at.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Get information about the revision:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;   &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get revision
   NAME             CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON
   authors-jee-v2   authors-jee                      1            False   ContainerMissing

   &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get revision authors-jee-v2 &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; yaml
   apiVersion: serving.knative.dev/v1
   kind: Revision
   &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;...]
   status:
     conditions:
     - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-06-03T08:12:49Z&quot;&lt;/span&gt;
       message: &lt;span class=&quot;s1&quot;&gt;'Unable to fetch image &quot;docker.io/nheidloff/authors:1&quot;: failed to resolve
         image to digest: failed to fetch image information: GET https://index.docker.io/v2/nheidloff/authors/manifests/1:
         MANIFEST_UNKNOWN: manifest unknown; map[Tag:1]'&lt;/span&gt;
       reason: ContainerMissing
       status: &lt;span class=&quot;s2&quot;&gt;&quot;False&quot;&lt;/span&gt;
       &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: ContainerHealthy
     - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-06-03T08:12:49Z&quot;&lt;/span&gt;
       message: &lt;span class=&quot;s1&quot;&gt;'Unable to fetch image &quot;docker.io/nheidloff/authors:1&quot;: failed to resolve
         image to digest: failed to fetch image information: GET https://index.docker.io/v2/nheidloff/authors/manifests/1:
         MANIFEST_UNKNOWN: manifest unknown; map[Tag:1]'&lt;/span&gt;
       reason: ContainerMissing
       status: &lt;span class=&quot;s2&quot;&gt;&quot;False&quot;&lt;/span&gt;
       &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: Ready
     - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-06-03T08:12:47Z&quot;&lt;/span&gt;
       status: Unknown
       &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: ResourcesAvailable
   &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;...]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;The conditions under the status section show that I had (on purpose, as a demo) mistyped the container image tag.&lt;/p&gt;

    &lt;p&gt;This is a real example:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; &lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;oc get revision helloworld-nodejs-v1 &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; yaml
 &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;...]
 status:
   conditions:
   - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-05-28T06:42:14Z&quot;&lt;/span&gt;
     message: The target could not be activated.
     reason: TimedOut
     severity: Info
     status: &lt;span class=&quot;s2&quot;&gt;&quot;False&quot;&lt;/span&gt;
     &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: Active
   - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-05-28T06:40:04Z&quot;&lt;/span&gt;
     status: Unknown
     &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: ContainerHealthy
   - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-05-28T06:40:05Z&quot;&lt;/span&gt;
     message: &lt;span class=&quot;s1&quot;&gt;'0/1 nodes are available: 1 Insufficient cpu.'&lt;/span&gt;
     reason: Unschedulable
     status: &lt;span class=&quot;s2&quot;&gt;&quot;False&quot;&lt;/span&gt;
     &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: Ready
   - lastTransitionTime: &lt;span class=&quot;s2&quot;&gt;&quot;2020-05-28T06:40:05Z&quot;&lt;/span&gt;
     message: &lt;span class=&quot;s1&quot;&gt;'0/1 nodes are available: 1 Insufficient cpu.'&lt;/span&gt;
     reason: Unschedulable
     status: &lt;span class=&quot;s2&quot;&gt;&quot;False&quot;&lt;/span&gt;
     &lt;span class=&quot;nb&quot;&gt;type&lt;/span&gt;: ResourcesAvailable
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;These conditions clearly show that the cluster was under CPU pressure and unable to schedule a new pod. This was on my first CRC configuration, which used only 6 vCPUs.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
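&lt;p&gt;When a revision dump gets long, a small filter helps to pull out just the failing condition. The YAML below is a trimmed stand-in for real &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;oc get revision -o yaml&lt;/code&gt; output:&lt;/p&gt;

```shell
# Trimmed stand-in for an 'oc get revision ... -o yaml' dump
revision='status:
  conditions:
  - message: Unable to fetch image
    reason: ContainerMissing
    status: "False"
    type: Ready'

# In each condition the reason line precedes the status line, so remember the
# last reason seen and print it whenever a condition reports status "False".
out=$(printf '%s\n' "$revision" | awk '/reason:/ {r=$2} /status: "False"/ {print "failing:", r}')
echo "$out"
```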

&lt;hr /&gt;

&lt;p&gt;In my next blog article in this series I will talk about &lt;a href=&quot;https://haralduebele.github.io/2020/06/10/serverless-and-knative-part-3-knative-eventing/&quot;&gt;Knative Eventing&lt;/a&gt;.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Knative" /><category term="Kubernetes" /><category term="Serverless" /><summary type="html">In the first part of this series I went through the installation of Knative on CodeReady Containers which is basically Red Hat OpenShift 4.4 running on a notebook.</summary></entry><entry><title type="html">Serverless and Knative - Part 1: Installing Knative on CodeReady Containers</title><link href="http://haralduebele.github.io/2020/06/02/serverless-and-knative-part-1-installing-knative-on-codeready-containers/" rel="alternate" type="text/html" title="Serverless and Knative - Part 1: Installing Knative on CodeReady Containers" /><published>2020-06-02T00:00:00+00:00</published><updated>2021-09-29T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/06/02/serverless-and-knative-part-1-installing-knative-on-codeready-containers</id><content type="html" xml:base="http://haralduebele.github.io/2020/06/02/serverless-and-knative-part-1-installing-knative-on-codeready-containers/">&lt;p&gt;I have worked with Kubernetes for quite some time now, also with Istio Service Mesh. Recently I decided that I want to explore Knative and its possibilities.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/06/m5EQknfW_400x400.jpg&quot; alt=&quot;Knative logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This blog post and the following 2 posts on Knative are based on Red Hat OpenShift. The instructions unfortunately don’t seem to work anymore and are also based on a very old version of Knative (v0.12). As I no longer have access to OpenShift (I retired from IBM) I can’t update the blog articles with tested and working instructions. But if you want to have a more current Knative Serving experience based on Minikube, you can test drive a &lt;a href=&quot;https://harald-u.github.io/knative-on-minikube/&quot; target=&quot;_blank&quot;&gt;workshop&lt;/a&gt; I have created. It is currently based on Knative v0.26.&lt;/p&gt;

&lt;p&gt;So what is Knative? The &lt;a href=&quot;https://knative.dev/&quot; target=&quot;_blank&quot;&gt;Knative web site&lt;/a&gt; describes it as “components build on top of Kubernetes, abstracting away the complex details and enabling developers to focus on what matters.” It has two distinct components; originally there were three:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Knative Build, which is no longer part of Knative; it is now a project of its own: “&lt;a href=&quot;https://github.com/tektoncd&quot; target=&quot;_blank&quot;&gt;Tekton&lt;/a&gt;”&lt;/li&gt;
  &lt;li&gt;Knative Serving, responsible for deploying and running containers, as well as networking and auto-scaling. Auto-scaling allows scale to zero and is the main reason why Knative is referred to as a serverless platform.&lt;/li&gt;
  &lt;li&gt;Knative Eventing, connecting Knative services (deployed by Knative Serving) with events or streams of events.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This will be a series of blogs about installing Knative, Knative Serving, and Knative Eventing.&lt;/p&gt;

&lt;p&gt;In order to explore Knative you need to have access to an instance, of course, and that may require installing it yourself. The &lt;a href=&quot;https://knative.dev/v0.12-docs/install/&quot; target=&quot;_blank&quot;&gt;Knative documentation&lt;/a&gt; (for v0.12) has instructions on how to install it on many different Kubernetes platforms, including Minikube. Perfect, Knative on my notebook.&lt;/p&gt;

&lt;h2 id=&quot;installation&quot;&gt;Installation&lt;/h2&gt;

&lt;p&gt;I followed the instructions for Minikube and installed it, and started a tutorial. At some point, I finished for the day, and stopped Minikube. The next morning it wouldn’t start again. I tried to find out what went wrong and in the end deleted the Minikube profile, recreated it, and reinstalled Knative. Just out of curiosity I restarted Minikube and ran into the very same problem. This time I was a little more successful with my investigation and found this issue: &lt;a href=&quot;https://github.com/knative/eventing/issues/2544&quot; target=&quot;_blank&quot;&gt;https://github.com/knative/eventing/issues/2544&lt;/a&gt;. I briefly considered moving to Knative 0.14 but then decided to test it on OpenShift instead. If you read some of my previous blogs you may know that I am &lt;a href=&quot;https://haralduebele.github.io/2019/09/13/red-hat-openshift-4-on-your-laptop/&quot; target=&quot;_blank&quot;&gt;a fan of CodeReady Containers (CRC)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Knative on Red Hat OpenShift is called OpenShift Serverless. It had been a preview (“beta”) for quite some time, but since the end of April 2020 it is GA (generally available), no longer preview only. According to the &lt;a href=&quot;https://access.redhat.com/articles/4912821&quot; target=&quot;_blank&quot;&gt;Red Hat OpenShift documentation&lt;/a&gt;, OpenShift Serverless v1.7.0 is based on Knative 0.13.2 (as of May 1st, 2020) and it is tested on OpenShift 4.3 and 4.4. The CRC version I am currently using (v1.10) is built on top of OpenShift 4.4, so it should work.&lt;/p&gt;

&lt;p&gt;The hardware or cluster size requirements for OpenShift Serverless are steep: a minimum of 10 CPUs and 40 GB of RAM. I only have 8 vCPUs (4 cores) and 32 GB of RAM in my notebook, and I need to run an operating system besides CRC, but I thought I’d give it a try. I started the Knative installation on a CRC config using 6 vCPUs and 20 GB of RAM and so far it seems to work. I have tried it on smaller configurations and got unschedulable pods (memory and/or CPU pressure).&lt;/p&gt;

&lt;p&gt;Installation is accomplished via an OpenShift Serverless Operator and it took me probably less than 20 minutes to have both Knative Serving and Eventing installed by just following the &lt;a href=&quot;https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html/serverless_applications/installing-openshift-serverless-1&quot; target=&quot;_blank&quot;&gt;instructions&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Install the OpenShift Serverless operator&lt;/li&gt;
  &lt;li&gt;Create a namespace for Knative Serving&lt;/li&gt;
  &lt;li&gt;Create Knative Serving via the Serverless operator’s API. This also installs &lt;a href=&quot;https://github.com/knative/net-kourier&quot; target=&quot;_blank&quot;&gt;Kourier&lt;/a&gt; as “an open-source lightweight Knative Ingress based on Envoy.” Kourier is a lightweight replacement for Istio.&lt;/li&gt;
  &lt;li&gt;Create a namespace for Knative Eventing&lt;/li&gt;
  &lt;li&gt;Create Knative Eventing via the Serverless operator’s API.&lt;/li&gt;
&lt;/ol&gt;
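
&lt;p&gt;For illustration, the custom resources created in steps 3 and 5 look roughly like this. This is only a sketch: the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apiVersion&lt;/code&gt; shown is my assumption based on the operator generation of that time and may differ in newer releases.&lt;/p&gt;

```yaml
# Sketch of the KnativeServing and KnativeEventing custom resources
# (steps 3 and 5 above). The apiVersion is an assumption and may
# differ in newer releases of the OpenShift Serverless operator.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```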

&lt;p&gt;I have started and stopped CRC many times now and it doesn’t have the issues that Minikube had.&lt;/p&gt;

&lt;p&gt;As a future exercise I will test the Knative Add-on for the IBM Cloud Kubernetes Service. This installs Knative 0.14 together with Istio on top of Kubernetes and requires a minimum of 3 worker nodes with 4 CPUs and 16 GB of memory (b3c.4x16 is the machine specification).&lt;/p&gt;

&lt;p&gt;In the next blog article I will cover &lt;a href=&quot;https://haralduebele.github.io/2020/06/03/serverless-and-knative-part-2-knative-serving/&quot;&gt;Knative Serving&lt;/a&gt; with an example from the Knative documentation.&lt;/p&gt;</content><author><name>Harald Uebele</name></author><category term="Knative" /><category term="Kubernetes" /><category term="Serverless" /><summary type="html">I have worked with Kubernetes for quite some time now, also with Istio Service Mesh. Recently I decided that I want to explore Knative and its possibilities.</summary></entry><entry><title type="html">Two great additions to ‘kubectl’</title><link href="http://haralduebele.github.io/2020/05/20/two-great-additions-to-kubectl/" rel="alternate" type="text/html" title="Two great additions to ‘kubectl’" /><published>2020-05-20T00:00:00+00:00</published><updated>2020-05-20T00:00:00+00:00</updated><id>http://haralduebele.github.io/2020/05/20/two-great-additions-to-kubectl</id><content type="html" xml:base="http://haralduebele.github.io/2020/05/20/two-great-additions-to-kubectl/">&lt;p&gt;I started to learn Kubernetes in its vanilla form. Almost a year ago I made my first steps on Red Hat OpenShift. From then on, going back to vanilla Kubernetes made me miss the easy way you switch namespaces (aka projects) in OpenShift. With ‘oc project’ it is like switching directories on your notebook. You can do that with ‘kubectl’ somehow but it is not as simple.&lt;/p&gt;

&lt;p&gt;Recently I found 2 power tools for kubectl: ‘kubectx’ and ‘kubens’. Ahmet Alp Balkan, a Google Software Engineer, created them and open sourced them (&lt;a href=&quot;https://github.com/ahmetb/kubectx&quot; target=&quot;_blank&quot;&gt;https://github.com/ahmetb/kubectx&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The Github repo has installation instructions for macOS and different flavours of Linux. When you install them, also make sure to install ‘fzf’ (“A command-line fuzzy finder”, &lt;a href=&quot;https://github.com/junegunn/fzf&quot; target=&quot;_blank&quot;&gt;https://github.com/junegunn/fzf&lt;/a&gt;); it is a cool addition.&lt;/p&gt;

&lt;h3 id=&quot;kubens&quot;&gt;kubens&lt;/h3&gt;

&lt;p&gt;‘kubens’ allows you to quickly switch namespaces in Kubernetes. Normally you work in ‘default’ and whenever you need to check something or do something in another namespace you need to add the ‘-n namespace’ parameter to your command.&lt;/p&gt;

&lt;p&gt;‘kubens istio-system’ will make ‘istio-system’ your new home, and a subsequent ‘kubectl get pod’ or ‘kubectl get svc’ will show the pods and services in istio-system. That’s not all.&lt;/p&gt;
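
&lt;p&gt;Under the hood, ‘kubens’ simply rewrites the namespace of the current context in your kubeconfig, the same thing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl config set-context --current --namespace=istio-system&lt;/code&gt; would do. Here is a minimal sketch of that effect, using a tiny made-up sample kubeconfig (names and paths are invented for the illustration):&lt;/p&gt;

```shell
# Illustration of what 'kubens istio-system' does under the hood:
# it rewrites the namespace of the current context in the kubeconfig.
# We simulate that on a small made-up sample file.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: Config' \
  'current-context: demo' \
  'contexts:' \
  '- name: demo' \
  '  context:' \
  '    cluster: demo-cluster' \
  '    user: demo-user' \
  '    namespace: default' > /tmp/sample-kubeconfig.yaml

# Switch the namespace of the current context to istio-system
sed -i 's/namespace: default/namespace: istio-system/' /tmp/sample-kubeconfig.yaml

grep 'namespace:' /tmp/sample-kubeconfig.yaml   # namespace: istio-system
```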

&lt;p&gt;‘kubens’ without a parameter will list all namespaces and with ‘fzf’ installed too you have a selectable list:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/05/peek-2020-05-20-09-13.gif&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I think that is even better than ‘oc projects’!&lt;/p&gt;

&lt;h3 id=&quot;kubectx&quot;&gt;kubectx&lt;/h3&gt;

&lt;p&gt;‘kubectx’ is really helpful when you work with multiple Kubernetes clusters. I typically work with a Kubernetes cluster on the IBM Cloud (IKS) and then very often start CRC (CodeReady Containers) to try something out on OpenShift. When I log into OpenShift, my connection to the IKS cluster seems to drop. It doesn’t actually drop; the kube context is simply switched to CRC. With ‘kubectx’ you can switch between them.&lt;/p&gt;
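
&lt;p&gt;Switching with ‘kubectx’ essentially boils down to changing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;current-context&lt;/code&gt; field in your kubeconfig, just like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl config use-context&lt;/code&gt; does. A sketch with a made-up two-context kubeconfig:&lt;/p&gt;

```shell
# Illustration of a context switch: 'kubectx IKS' (or
# 'kubectl config use-context IKS') updates 'current-context'.
# Context names here are made up for the example.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: Config' \
  'current-context: CRC' \
  'contexts:' \
  '- name: CRC' \
  '- name: IKS' > /tmp/two-contexts.yaml

# Switch the active context from CRC to IKS
sed -i 's/^current-context: CRC$/current-context: IKS/' /tmp/two-contexts.yaml

grep '^current-context:' /tmp/two-contexts.yaml   # current-context: IKS
```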

&lt;p&gt;In this example I have two contexts, one is CRC, the other IKS (Kubernetes on IBM Cloud):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/05/2020-05-20_09-26.png?w=603&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Not exactly easy to tell which one is which, is it? But you can set aliases for the entries like this:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectx &lt;span class=&quot;nv&quot;&gt;CRC&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;default/api-crc-testing:6443/kube:admin
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;kubectx &lt;span class=&quot;nv&quot;&gt;IKS&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;knative/br1td2of0j1q10rc8aj0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And then you get a list with recognizable names:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2020/05/peek-2020-05-20-10-14.gif&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can now switch via the list. In addition, with ‘kubectx -‘ you can switch to the previous context.&lt;/p&gt;

&lt;p&gt;When you constantly create new kube contexts, e.g. when you create new CRC or Minikube instances, this list may grow and get unmanageable. But with ‘kubectx -d &lt;NAME&gt;’ you can delete entries from the list. (They will still be in the kube context, though.)&lt;/p&gt;