This post contains notes I took while participating in the AEWS (Amazon EKS Workshop Study) program organized by CloudNet@.

 

1. Amazon EKS Introduction

According to the official Amazon EKS User Guide (link), Amazon Web Services (AWS) describes Amazon EKS as a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane. The open-source Kubernetes documentation (link) includes a diagram of the control plane that explains the architecture of a Kubernetes cluster.

The control plane diagram shows five components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager. With a managed service such as Amazon EKS, you do not have to install and operate these components yourself; you only manage the Kubernetes worker nodes. For more detailed information, refer to the EKS workshop description (link).

 

Newly released versions of open-source Kubernetes can be checked at https://github.com/kubernetes/kubernetes/releases, and a detailed explanation of the version numbering scheme is available at link.

x.y.z | x: major version, y: minor version, z: patch version
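
For example, a minimal sketch (assuming kubectl is already configured against a cluster and jq is installed) that reads the version an API server reports and shows how it maps to x.y.z:

# gitVersion contains the full x.y.z string (e.g. v1.28.5-eks-...); major/minor are x and y
kubectl version -o json | jq -r '.serverVersion | "gitVersion: \(.gitVersion), major: \(.major), minor: \(.minor)"'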

2. EKS Workshop environment and EC2 bastion VM configuration

For this study, we prepared our AWS accounts in advance following the "Start with an AWS Account" section of the EKS workshop. Setting up the practice environment covered everything from configuring AWS Cloud9 to installing kubectl and eksctl. Thanks to the AWS CloudFormation template prepared by our study leader, Mr. Kasida, we were able to participate in the study comfortably. As of March 2024, when this study was conducted, we chose EKS version 1.28, which supports add-ons and is among the versions most broadly compatible with and validated against applications in the K8s ecosystem. To understand the AWS environment we work in during the study, we referenced the AWS architecture icons and represented it schematically as follows.

After downloading the CloudFormation template as shown below, we deployed it using the AWS CLI (link).

$ curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-1week.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10373  100 10373    0     0   180k      0 --:--:-- --:--:-- --:--:--  180k
$ aws cloudformation deploy --template-file myeks-1week.yaml --stack-name myeks --parameter-overrides KeyName=kp-ian SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - myeks

After executing the following command, you can find the bastion host's IP address; using that IP address you can SSH in and proceed with the subsequent tasks. The SSH ID and password are defined in the CloudFormation template above, so please refer to it.

aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text
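
For example, a small sketch that stores the printed IP in a (hypothetical) variable and opens the SSH session; the password is the one defined in the CloudFormation template:

# Grab the bastion host's public IP from the stack outputs (BASTION_IP is an illustrative variable name)
BASTION_IP=$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text)

# Connect as root; enter the password defined in the CloudFormation template when prompted
ssh root@$BASTION_IP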

Once you have accessed the working EC2 instance, you need to configure IAM user credentials. For convenience during the practice, we entered the credentials of an IAM user with administrator privileges.

[root@myeks-host ~]# aws ec2 describe-instances

Unable to locate credentials. You can configure credentials by running "aws configure".
[root@myeks-host ~]# aws configure
AWS Access Key ID [None]: AKI..........
AWS Secret Access Key [None]: FQ.......................
Default region name [None]: ap-northeast-2
Default output format [None]: json
[root@myeks-host ~]# aws ec2 describe-instances
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
                    "ImageId": "ami-025cebb6913219d99",...........

 

3. Cluster creation using eksctl

In the EKS workshop content (link), clusters are created by passing a YAML file to the eksctl command. However, the basic options can also be passed directly to eksctl as command-line parameters, and that is the approach we explored in this study. The necessary option values were stored in environment variables and reused.
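
For reference, here is a rough sketch of what an equivalent eksctl ClusterConfig manifest could look like for the options used below. The subnet IDs are placeholders for $PubSubnet1 and $PubSubnet2, and desiredCapacity: 2 simply mirrors eksctl's default node count, so treat this as an illustration rather than the file actually used in the workshop:

cat << 'EOF' > myeks.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.28"
vpc:
  subnets:
    public:
      ap-northeast-2a: { id: subnet-xxxxxxxx }   # replace with $PubSubnet1
      ap-northeast-2c: { id: subnet-yyyyyyyy }   # replace with $PubSubnet2
managedNodeGroups:
  - name: myeks-nodegroup
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 30
    ssh:
      allow: true
EOF
# eksctl create cluster -f myeks.yaml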

 

3.1. Environment variables

The $AWS_DEFAULT_REGION and $CLUSTER_NAME environment variables are already prepared on the working (bastion) EC2 instance. We checked these and then went ahead to set up the remaining environment variables.

[root@myeks-host ~]# echo $AWS_DEFAULT_REGION
ap-northeast-2
[root@myeks-host ~]# echo $CLUSTER_NAME
myeks
[root@myeks-host ~]# export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq -r .Vpcs[].VpcId)
[root@myeks-host ~]# echo "export VPCID=$VPCID" >> /etc/profile
[root@myeks-host ~]# export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
[root@myeks-host ~]# export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
[root@myeks-host ~]# echo "export PubSubnet1=$PubSubnet1" >> /etc/profile
[root@myeks-host ~]# echo "export PubSubnet2=$PubSubnet2" >> /etc/profile
[root@myeks-host ~]# echo $VPCID
vpc-06019251cc08c519b
[root@myeks-host ~]# echo $PubSubnet1,$PubSubnet2
subnet-09c63523c434bcaec,subnet-0244ef5fa73c2f986

3.2. EKS cluster creation

Once the preparation is complete, you can execute the following command to proceed.

eksctl create cluster --name $CLUSTER_NAME --region=$AWS_DEFAULT_REGION --nodegroup-name=$CLUSTER_NAME-nodegroup --node-type=t3.medium \
--node-volume-size=30 --vpc-public-subnets "$PubSubnet1,$PubSubnet2" --version 1.28 --ssh-access --external-dns-access --verbose 4

 

It takes about 15-20 minutes, so let's wait a bit. In the meantime, opening another terminal and running the following command lets you watch the worker node instances come up.

while true; do aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output text ; echo "------------------------------" ; sleep 1; done
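
Alternatively, here is a small sketch that polls the control plane status directly; the call fails until eksctl has actually registered the cluster, and the status changes from CREATING to ACTIVE when the cluster is ready:

# Check the EKS control plane status (CREATING -> ACTIVE)
while true; do aws eks describe-cluster --name $CLUSTER_NAME --query cluster.status --output text ; date ; sleep 10; done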

 

Once the cluster creation is complete, the terminal status will change as follows.

You can also check the deployed EKS details in the AWS console (if the console hasn't refreshed, try clicking the refresh button).

 

After the EKS cluster creation is complete, you can run various commands against the cluster using kubectl. During the study we tried and verified many things, but I will document just one of them in this post.

4. Check the created EKS cluster - endpoint access (Public -> Public and private)

To check the information of the EKS cluster, you can use the command "kubectl cluster-info".

(awesian@myeks:N/A) [root@myeks-host ~]# eksctl get nodegroup --cluster $CLUSTER_NAME --name $CLUSTER_NAME-nodegroup
CLUSTER NODEGROUP       STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID        ASG NAME              TYPE
myeks   myeks-nodegroup ACTIVE  2024-03-09T18:02:34Z    2               2               2                       t3.medium       AL2_x86_64      eks-myeks-nodegroup-eac71230-bb27-1b00-6c14-e2c96dfc5646       managed
(awesian@myeks:N/A) [root@myeks-host ~]# kubectl cluster-info
Kubernetes control plane is running at https://088CD22A78682CF5F017CFEE329E3C1A.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://088CD22A78682CF5F017CFEE329E3C1A.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Additionally, the "eksctl get cluster" command can also be used to check the information. One notable point was that the created endpoint was public. Being public means that the endpoint is accessible over the network. To proceed with actions like creating Pods through this endpoint, additional authentication is required. However, for simple tasks like version checking, access to the created EKS cluster was possible without any separate authentication when the endpoint is public.

Even when checked from the console, the API server endpoint access is listed as "Public".

Let's change the API server endpoint access to "Public and Private". To detect changes, we can use a total of three terminals. Two of these terminals will be used for monitoring purposes.

# Terminal A - for monitoring
APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
dig +short $APIDNS
while true; do dig +short $APIDNS ; echo "------------------------------" ; date; sleep 1; done

# Terminal B - for another monitoring
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
while true; do ssh ec2-user@$N1 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo ; ssh ec2-user@$N2 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo "------------------------------" ; date; sleep 1; done

# Terminal C - Public(with only one IP address)+Private. It will take 8-10 minutes.
aws eks update-cluster-config --region $AWS_DEFAULT_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="$(curl -s ipinfo.io/ip)/32",endpointPrivateAccess=true

After waiting, you can observe in Terminal A that the two public IP addresses previously returned for the endpoint have changed to private IP addresses within the VPC subnets.

The lack of change in Terminal B is probably because, with both Public and Private access enabled, there is no need to terminate the network connections that kube-proxy and kubelet have already established.

After the change, executing "kubectl" commands may not work. Attempting to run it could result in an error message, indicating that the visible IP address is not a Public IP. This implies that, with the cluster settings altered, the Endpoint now returns a Private IP.

(awesian@myeks:N/A) [root@myeks-host ~]# kubectl get node -v=6
I0310 03:44:52.743735   18383 loader.go:395] Config loaded from file:  /root/.kube/config
I0310 03:45:23.611890   18383 round_trippers.go:553] GET https://088CD22A78682CF5F017CFEE329E3C1A.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500  in 30861 milliseconds
I0310 03:45:23.612005   18383 helpers.go:264] Connection error: Get https://088CD22A78682CF5F017CFEE329E3C1A.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500: dial tcp 192.168.1.51:443: i/o timeout
Unable to connect to the server: dial tcp 192.168.1.51:443: i/o timeout
(awesian@myeks:N/A) [root@myeks-host ~]# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: dial tcp 192.168.2.122:443: i/o timeout

 

The connection timeout means that additional rules are needed in the EKS control plane security group to allow access from the bastion's subnet. Using the following commands, an additional rule was added to this security group so that myeks-host (192.168.1.100) can reach the cluster and its nodes (pods).

# EKS ControlPlane Security Group ID
aws ec2 describe-security-groups --filters Name=group-name,Values=*ControlPlaneSecurityGroup* --query "SecurityGroups[*].[GroupId]" --output text
CPSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=*ControlPlaneSecurityGroup* --query "SecurityGroups[*].[GroupId]" --output text)
echo $CPSGID

# Add a rule to enable connection from myeks-host to nodes (pods) in the security group
aws ec2 authorize-security-group-ingress --group-id $CPSGID --protocol '-1' --cidr 192.168.1.100/32
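
Once the rule is in place, kubectl from the bastion host should be able to reach the API server again through the private IPs; a quick sanity check (sketch):

# The API server should now respond through its private endpoint
kubectl get node -v=6
kubectl cluster-info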

Also, let's make kubelet and kube-proxy reconnect to the API server through its private IP addresses. Run the following commands:

# kube-proxy rollout
kubectl rollout restart ds/kube-proxy -n kube-system

# Kubelet is applied by running systemctl restart kubelet on individual nodes. The $N1 and $N2 environment variables must be set.
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo systemctl restart kubelet; echo; done

After running the first command above, you can see that the connection to kube-proxy is made with a private IP.

 

If the second command runs successfully, you can see that the kubelet is also connecting to the private IP.

5. Resource deallocation

After completing the exercise, be sure to delete resources to minimize unnecessary costs; a quick verification sketch follows after the list below.

  • Deleting an Amazon EKS cluster (takes about 10 minutes): eksctl delete cluster --name $CLUSTER_NAME
  • To delete the AWS CloudFormation stack after the above process is completed: aws cloudformation delete-stack --stack-name myeks
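
As referenced above, a small verification sketch: it waits for the stack deletion to finish and then confirms that no cluster remains. Note that the bastion host is itself part of the stack being deleted, so these checks are assumed to run from a machine outside the stack (e.g., your local PC with the AWS CLI configured).

# Wait until the CloudFormation stack has been completely deleted
aws cloudformation wait stack-delete-complete --stack-name myeks

# Should list no clusters (or fail) once deletion is complete
eksctl get cluster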

 


(Recovered from my old article - originally posted on 2017.03.14 10:32 KST)

 

(Note: This is English version. If you want to see Korean version, please visit http://ko.sdndev.net/11.)

 

There was a second OpenStack upstream training in Korea from 10 AM to 5 PM on February 11th, 2017.

 

OpenStack Days Korea 2017 will be held in the middle of July, rather than in February as it was last year, so I organized the second upstream training to be much longer (a morning session and an afternoon session) than the first one (only three hours). More details on the second upstream training are available at: http://openstack-kr.github.io/upstream-training/2017/ .

 

Also, last November I organized a local study program whose goals were to learn the upstream training materials and to help with the second upstream training. Around twenty members started to participate in the study program, and about ten of them finally agreed to help with the second upstream training as mentors. Thanks to them, I feel that the second upstream training was quite successful :)

 

(Study content: https://github.com/openstack-kr/openstack-study/tree/master/2016-fall-upstream )

 

- Studying upstream training materials #1 -

 

- Studying upstream training materials #2 -

 

 

In Korea, Toz (URL: http://www.toz.co.kr/index.htm) is a place that provides meeting rooms for various events such as seminars, small meetups, and study groups, and I had been using Toz with kind support from Naver D2. However, when I considered the venue for the second upstream training, I concluded that Toz would not be the best place for hands-on labs, discussions on upstream topics, and mentoring activities. Fortunately, Naver D2 understood the situation and decided to provide D2 Startup Factory, which is very large: it accommodates a maximum of 150 attendees :) Moreover, the Ubuntu Korea Community joined the organization of the second upstream training with Launchpad (https://launchpad.net/ ) content.

 

However, just four days before the actual upstream training, at the final study meetup where attendees were discussing preparations, we found out that D2 Startup Factory did not have enough power outlets. We actually needed about 40 outlets for attendees and mentors, which meant we had to find power strips, and D2 Startup Factory did not have enough of them. Fortunately, I found two institutions that kindly agreed to lend power strips.

 

1. NAIM Networks (http://www.naimnetworks.com/)

This is the company where I worked from October 2013 to August 2014. Since the company also provides SDN trainings, I thought it might have spare power strips if no training activities were scheduled on the same days as the second upstream training. I asked my former colleague and team manager, and they told me that some spare power strips for server racks would be available. They finally agreed to lend three 14-socket power strips :)

 

- Power strips with 14 sockets! (Thanks to NAIM Networks, Korea) -

 

 

2. MODU Labs (http://modulabs.co.kr/)

 

In early 2015, I participated in an artificial intelligence study group, where I got to know Seungil Kim, who is now the head of MODU Labs. I think the purpose of the lab is great, and I would very much like to join one of its study groups directly when time allows, but realistically it is not easy to follow what MODU Labs does while also working on OpenStack. Since the director also runs many external training sessions, I asked him; I reached him on Friday morning, and although he was not at the lab that day, with the help of another member I was able, with some difficulty, to borrow several 6-socket power strips.

 

Thanks to this preparation and the help of many people, I think the second upstream training was run with more concrete goals than the first one, and we had an enjoyable time with many questions and answers.

 

- Group photo after the second upstream training -

 

- During the hands-on labs -

 

- Explaining with Etherpad -

 

- Discussing even during the break -

 

- In the middle of a fun discussion -

 

 

I hope the next upstream training also goes well. It will probably still need D2's help, and although many mentors joined, running the whole event largely by myself was honestly a bit tiring. I hope more Koreans contribute to OpenStack upstream, and that based on that experience we can happily organize future events together.

 

(Recovered from my old article - originally posted on 2016.02.25 02:59 KST)

 

(Note: This is English version. If you want to see Korean version, please visit http://ko.sdndev.net/10.)

 

There was the first local OpenStack Upstream Training in Korea on February 18, 2016, Thursday.

It was announced with OpenStack Days Korea 2016 (http://event.openstack.or.kr/program.html).

The announcement for OpenStack Upstream Training in Korea 2016 is as follows:

 

Track V: OpenStack Upstream Training
Overview: It is a great honor to hold a brief version (160 minutes) of Upstream Training, which originally takes two days. Upstream Training is designed to be practical for OpenStack upstream open-source developers. More information on Upstream Training is available at http://docs.openstack.org/upstream-training/. This local training is a shortened version of the official Upstream Training, with slides translated into Korean.
Date & Time: February 18, 2016 (Thu), 13:00~16:10 (160 minutes in total)
Note: Online engagement (e.g., IRC, mailing list, Slack, ...) is recommended.
Location: Track V (Ruby & Jade), 3F, Jamsil Lotte Hotel
Expected # of Trainees: 30 people (early registration required)
Fee: Free (requirement: OpenStack Days Korea 2016 registration)
Preparation: Laptop with Wi-Fi (recommended: Ubuntu 14.04 + 4GB RAM VM for DevStack)
Staff: Ian Y. Choi (preparation, training, assistant); Stephan Ahn (preparation, training, assistant); Sungjin Kang (preparation, training, assistant); Namgon Lucas Kim, Junsik Shin, Jungsu Han (GIST - attended Tokyo Upstream Training, assistants & mentoring)
Reference: 1. OpenStack Upstream Training Official Document (docs.openstack.org/upstream-training); 2. My OpenStack Upstream Training Experience (before Tokyo Summit) by Ian Y. Choi
※ The detailed schedule is subject to change.

 

A total of 35 people pre-registered for the training, and 29 attended. Among them, 24 actively participated in the training with Etherpad and an Ubuntu VM.

 

Photos were taken by ujuc! :) Also, you can find the Etherpad at https://etherpad.openstack.org/p/upstream-training-korea-2016 and the translated slides at http://docs.openstack.org/ko_KR/upstream-training/.

 

 

 

 

 

 

Thank you very much to all the attendees, and I really appreciate the help from all the staff!

(Recovered from my old article - originally posted on 2016.02.16 10:02 KST)

 

Last January, I shared a presentation explaining the following:

 

Title: Open Hardware & Sources + Azure for a use case: indoor positioning

- Slide: http://1drv.ms/1PAOx3n

 

It explains why I chose Azure for one use case: an indoor positioning application. I used one Linux virtual machine in Azure for this use case.

 

 

 

 

 

Moreover, from slide 8, you can see how the dashboards differ between Azure and OpenStack.

 

 

If you want to see the demonstration video, please see

http://1drv.ms/1LqxPQc

 

 

 

 

(Recovered from my old article - originally posted on 2016.02.02 00:05 KST)

 

I participated in an OpenStack study meetup last Friday. In the study, there were two presentations that attendees had wanted to hear last year but could not see, and attendees also discussed how we could study more effectively in 2016.

 

Facebook notice: https://www.facebook.com/events/1711379062437713/

 
I would like to briefly summarize those presentations.
 

1. codetree: Installing OpenStack using his shell scripts in a more automated manner

 

 

 

He had already presented this topic last July. However, last week he presented more details with updated shell scripts: version 2.

The main changes compared to version 1 are:

 - Extracting and unifying duplicated functionality into shell script functions => a "common" directory

 - Testing how nova-docker is installed and how Docker instances can be created

 - Testing OpenStack installation with base virtual machine images using PXE

 

 

The shell script sources are available at: https://github.com/openstack-kr/study_devops.

 

The scripts are convenient because they eliminate much of the repetitive manual work. One remarkable aspect is that the scripts follow the official OpenStack installation guide (Kilo). For example, "kilo-step-01.sh" means that the script follows Chapter 1 of the OpenStack Kilo installation guide. So, by studying the scripts, people can better understand how OpenStack is installed according to the official installation guide.

 

- Slide link: https://onedrive.live.com/redir?resid=4A848F40E8EF8761%21572

 

2. Sungwon: HA using DVR

 

 

 

He presented last week because he could not attend last December. DVR (Distributed Virtual Router), which was integrated in the OpenStack Juno release, makes it possible to distribute many network services that were previously concentrated in a single Neutron server instance.

 

I was so impressed by his presentation because he customized codetree's shell scripts.

He forked codetree's GitHub repository, and added DVR installation and integration into his forked repository.

 

- Slide link: https://onedrive.live.com/redir?resid=4A848F40E8EF8761%21575

 

(Recovered from my old article - originally posted on 2014.12.12 08:13 KST)

 

I attended "Online MidoNet Network Virtualization Meetup" on last 09 Dec (URL: http://www.meetup.com/Online-MidoNet-Meetup/). This article briefly talks about this webinar and my experiences installing Midostack.

 

MidoNet from Midokura is designed to provide the following network functionality by placing a network virtualization layer between the cloud management platform layer (e.g., OpenStack) and the hypervisor layer (e.g., KVM).

 - Logical Switching: decoupling Layer 2 and Layer 3 in physical networks

 - Logical Routing: supporting routers in virtual networks

 - Logical Firewall: kernel integrated, high performance, distributed firewall

 - Logical Layer 4 Load Balancer: application load balancing in software

 - API: integrating with cloud management platforms using RESTful API

MidoNet is open source under the Apache 2 license, aiming at openness and user & vendor neutrality for production networks.

 

According to Midokura, MidoNet implements its functionality at the kernel level and interacts with multiple hosts through MidoNet agents, which makes it easier to create and manage logical topologies such as overlay networks.

 

MidoNet chose a distributed model rather than a centralized one to address failure (e.g., SPOF, active/stand-by failover), scalability, and network efficiency issues.

 

 

 

Midostack is designed to let you experience MidoNet with OpenStack, an open-source cloud management platform. When you install Midostack, DevStack is automatically downloaded and executed, so Midostack is aimed at MidoNet open-source contributors and people who want to learn MidoNet. Currently, Midostack only supports the Ubuntu 14.04 Linux distribution, and if you want to deploy MidoNet in a production environment, Packstack RDO running on CentOS or RHEL 7 is recommended (URL: https://openstack.redhat.com/MidoNet_integration). It works with the OpenStack Icehouse release.

 

Also, more details on MidoNet and how to contribute to MidoNet were explained in the webinar. I think some of the slides will be published soon.

 

I had been curious how Midostack runs OpenStack with DevStack scripts, so I installed Midostack and executed several midonet-cli commands. The following is the basic configuration I used to install Midostack.

 

- Virtualization platform I used: VirtualBox 4.3

- OS: Ubuntu 14.04 LTS (64 bit)

- Basic configuration: 8GB RAM, dynamically allocated disk with 50GB, NAT configuration

(Midostack by default assumes that the public network range is 200.200.200.0/24. If you want to test network functionalities, please configure some setting files before installing Midostack, or please configure proper network settings in VirtualBox.)

 

It is easy. You can just input the following commands, according to http://www.midonet.org/#quickstart, with a few assumptions:

 

- Your Ubuntu 14.04 should be up-to-date. (If not, please execute 'sudo apt-get update', 'sudo apt-get upgrade', 'sudo apt-get dist-upgrade', and 'sudo reboot'.)

- You need to install 'git'. To install it, please execute 'sudo apt-get install git'.

 

 $ git clone http://github.com/midonet/midostack

 $ cd midostack

 $ ./midonet_stack.sh

 

After executing those commands, the latest source code of MidoNet, DevStack, and the OpenStack components is downloaded and installed, and some basic logical routers used by MidoNet are configured.

(About two weeks ago, I needed to install the latest protobuf version, but now it seems that it has been resolved when you install Midostack using the latest scripts from git.)

 

But, unfortunately, my installation failed when creating a logical router after installing DevStack. I asked the IRC community, and someone gave me a solution: 'execute ./midonet_unstack.sh and then ./midonet_stack.sh again', and after that it worked very well! (Thanks, tfukushima!)

 

 

 

Successfully installed, similar to DevStack, with some additional components related to MidoNet.

 

 

Horizon: by default, the 200.200.200.0/24 public network has been created. I created one VM instance.

 

 

The following figure illustrates a midonet-cli session. Using midonet-cli, I can check the tenant list in OpenStack, as well as the lists of logical routers, their ports, hosts, and the pre-routing and post-routing chains for routers.

 

 

It seems I could experience more if I configured Midostack across multiple nodes. I briefly discussed this on IRC, and someone strongly recommended configuring a multi-node environment with Packstack RDO. Please refer to this information; it may be very helpful for anyone who wants to configure MidoNet in a multi-node environment.

 

 

[References]

http://www.midonet.org/#quickstart

- Slides from "Online MidoNet Network Virtualization Meetup" (http://www.meetup.com/Online-MidoNet-Meetup/)

http://komeiy.hatenablog.com/entry/2014/11/13/012401

- Midonet IRC!

(Recovered from my old article - originally posted on 2014.11.15 21:51 KST)

 

This article explains how I translated the Japanese Ryu-book to Korean even though I am not very familiar with Japanese. I used VisualTran Mate (http://en.visualtran.com/?type=en) as a translation helper tool.

 

Ryu is an SDN controller written in Python. The Ryu-book (https://osrg.github.io/ryu/resources.html) explains how to develop SDN applications using the Ryu controller in a very illustrative manner. In April, I found that all the texts and book publishing tools were uploaded to a GitHub repository (https://github.com/osrg/ryu-book), and that all the translation work is carried out through git commits. This is why I finally decided to translate the book to Korean: I really wanted to help more Korean people read this illustrative book.

 

 

At first, I had no privileges to edit the osrg/ryu-book git repository, so I forked it. After committing my translation to my forked repository, I could make a pull request to the original repository. The figure below shows my forked ryu-book repository.

 

 

To work on the translation on my own computer, I needed to retrieve the source, so I ran git and cloned the forked repository locally.
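
The overall git workflow looked roughly like this (a sketch: the clone URL points at my fork mentioned above, while the staged paths and the commit message are only illustrative):

# Clone the forked repository to the local machine
git clone https://github.com/ianychoi/ryu-book.git
cd ryu-book

# ... translate the *.rst files ...

# Stage the translated files, record them, and push them back to the fork
git add .
git commit -m "Add Korean translation"
git push origin master

# Then open a pull request against the original osrg/ryu-book repository on GitHub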

 

 

 

The original Japanese texts are written as *.rst files, which are plain text files in UTF-8 format.

 

 

The VisualTran Mate program supports MS Word (*.doc, *.docx) files, but plain text files open fine in MS Word, so I opened the rst files there. The figure below shows the 'rest_api.rst' file opened in my MS Word program.

 

 

You can see that the VisualTran Mate ribbon menu is loaded in MS Word. When I click the VisualTran Mate icon, the program runs and automatically detects the source language (Japanese) and the target language (Korean, because I am using Korean Windows).

 

 

The figure below shows machine translation using Microsoft Bing, which VisualTran Mate supports.

 

 

However, the quality of the machine-translated results is not good enough to read comfortably in Korean, even though the accuracy of Japanese-to-Korean translation is said to be about 95%. Sometimes spaces are missing, and some words do not fit the context. In my case, I had three advantages for producing a better translation.

 

 1) Although I am not good at Japanese, I can read Katakana characters. Japanese Katakana characters are used to write foreign words such as 'flow table' and 'link aggregation'.

 

 2) There is an English edition of ryu-book (https://osrg.github.io/ryu-book/en/html/). When I do not understand some sentences, I find the corresponding English sentences, understand what they mean, and reflect that understanding in the Korean sentences.

 

 3) VisualTran Mate is a good tool that shows the original sentences, the machine-translated sentences, and my own translation simultaneously. This is very powerful; without it, I would have to press ALT+TAB repeatedly, find the corresponding sentences in different programs, and compare them.

 

 

Finally, I completed my translation, submitted a pull request with the results to the original ryu-book repository, and now my translation is publicly available.

(Available as PDF, MOBI, EPUB, and HTML.)

 

First of all, I would like to thank the Ryu-book team. They wrote the Japanese ryu-book first and then the English ryu-book; without those books, I could not have translated it well into Korean. Moreover, their GitHub repository is a great way to collaborate on translations in an open-source spirit. And thank you very much to VisualTran Mate, which saved me a great deal of manual work during the translation.


(Recovered from my old article - originally posted on 2014.09.13 16:49 KST)

 

Previously, many developers and engineers had to add an OpenFlow dissector to Wireshark to analyze OpenFlow protocol packets. This was quite painful, because the version of the OpenFlow dissector had to match the version of Wireshark.

 

According to the Wireshark wiki page (http://wiki.wireshark.org/OpenFlow), an OpenFlow dissector is available as of Wireshark 1.12.0, and this version was released as a 'Stable Release' on 31 July, 2014.

 

I downloaded this stable release and confirmed that it supports both OpenFlow 1.0 and 1.3 well!

 

Just download the latest stable release of Wireshark from the Wireshark homepage (https://www.wireshark.org/download.html) and install it.

After installation, you can see that your Wireshark supports OpenFlow 1.0, 1.3 and 1.4.

 

I have downloaded & installed Wireshark 1.12.0 on my Windows computer and it worked very well!

 

[Menu: Supported Protocols]

 

[Parts from supported protocols]

 

Here are some screenshots which show that OpenFlow 1.0 & 1.3 filters work very well:

 

[OpenFlow 1.0: openflow_v1]

 

[OpenFlow 1.3: openflow_v4]

 


(Recovered from my old article - originally posted on 2014.07.26 22:20 KST)

 

I have finished the draft translation of "RYU SDN Framework", which was originally written in Japanese and English.

 

HTML: http://ianychoi.github.io/ryu-book/ko/html/

PDF: http://ianychoi.github.io/ryu-book/ko/Ryubook.pdf

 

This book was published in Japanese first, early this year (http://osrg.github.io/ryu-book/ja/html/), and after several months the English edition was also published (http://osrg.github.io/ryu-book/en/html/).

 

I'm not good at Japanese, so I mainly relied on Google Translate (http://translate.google.com). Please give me feedback through GitHub (https://github.com/ianychoi/ryu-book/).

 

Currently, the PDF edition has line-feed problems. I think it is caused by a conflict between the ko.tex and listings packages used in LaTeX, but it is a little difficult for me to solve.

 

I hope that more Korean developers will read this book and contribute to the SDN world!

 

Note: Ryu is an SDN controller written in Python, and it supports various OpenFlow versions: 1.0, 1.2, 1.3, and 1.4.
