
· 7 min read

Introduction

As a developer or company, have you ever wanted a simpler way to monitor your AWS billing than email notifications? In this article, I will show you how easy it is to use Vanus and two connectors (AWS Billing Source & Slack Sink) to receive your AWS Billing reports in a Slack channel. This is made possible by Vanus Connect.

Vanus Connect is a set of producers and consumers that provide interoperability across services, systems, and platforms.

img/img_1.png

Vanus Connect has two kinds of connectors: Source Connectors and Sink Connectors. A Source Connector obtains data from an underlying data producer (e.g. AWS Billing), transforms that data into CloudEvents, and delivers it to its target. A Sink Connector receives events in CloudEvents format, processes them, and sends them downstream to a consumer (e.g. Slack).

AWS Billing Source

The AWS Billing Source Connector uses the AWS Cost Explorer API to pull billing data from the previous day. The AWS Billing Source requires a "target URL" to which it sends the CloudEvents. In this demo, we will use the Vanus Gateway to receive these CloudEvents. You will also need programmatic access credentials (access_key_id & secret_access_key) from your AWS account.
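For a sense of what these CloudEvents look like, here is a rough sketch of a billing event. Only the data fields (date, service, amount, unit) and the source attribute are taken from the transformer used later in this post; the other envelope values are illustrative placeholders, not the connector's exact output.

{
  "specversion": "1.0",
  "id": "illustrative-event-id",
  "source": "cloud.aws.billing",
  "type": "aws.service.daily",
  "data": {
    "date": "2023-01-28",
    "service": "Amazon Elastic Compute Cloud",
    "amount": "12.34",
    "unit": "USD"
  }
}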

Slack Sink

The Slack Sink connector handles incoming CloudEvents by extracting the data part of each event and delivering it to Slack channels. To set up the Slack Sink correctly, you need to create a Slack App, and the app must have the chat:write and chat:write.public permissions.

Setting Up AWS Billing Source & Slack Sink using Vanus Connect

In this demo, we will use the Vanus Playground, a cloud Kubernetes environment. We will create the Slack Sink connector first to receive incoming CloudEvents, then set up the AWS Billing Source connector.

To begin, we will install Vanus with the command:

kubectl apply -f https://vanus.s3.us-west-2.amazonaws.com/releases/v0.4.0/vanus.yaml

Confirm that you have it installed:

kubectl get po -n vanus

You should see something like this:

vanus-controller-0               1/1     Running   0          96s
vanus-controller-1               1/1     Running   0          72s
vanus-controller-2               1/1     Running   0          69s
vanus-gateway-8677fc868f-rmjt9   1/1     Running   0          97s
vanus-store-0                    1/1     Running   0          96s
vanus-store-1                    1/1     Running   0          68s
vanus-store-2                    1/1     Running   0          68s
vanus-timer-5cd59c5bf-hmprp      1/1     Running   0          97s
vanus-timer-5cd59c5bf-pqkd5      1/1     Running   0          97s
vanus-trigger-7685d6cc69-8jgsl   1/1     Running   0          97s

Next, we will install vsctl, the command-line tool for Vanus:

curl -O https://vsctl.s3.us-west-2.amazonaws.com/releases/v0.4.0/linux-amd64/vsctl
chmod ug+x vsctl
mv vsctl /usr/local/bin

vsctl needs a gateway URL to communicate with Vanus. Get the URL from the service called vanus-gateway, then export the address as an environment variable:

export VANUS_GATEWAY=192.168.49.2:30001
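The address above is the one used in the Vanus Playground. If you are running on a different cluster, you can look up the vanus-gateway service's address yourself, for example:

kubectl get svc vanus-gateway -n vanus

Then export the node IP and exposed port as VANUS_GATEWAY.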

Setting Up Slack Sink

Now, let's create a directory for our Slack Sink:

mkdir slack

Our directory is called slack, but you can name it whatever you like. Next, move into that directory, where we will create our config.yml file:

cd slack

img/img_2.png

Inside the slack directory, create a file called config.yml

touch config.yml 

img/img_3.png

To set up the Slack Sink, we first need to set up our Slack App and obtain the necessary credentials to receive our billing report. To set up the Slack App, follow these steps:

1. Create a Slack channel where you want to receive the billing report.
2. Create a Slack App. The app should have at least the chat:write and chat:write.public permissions.
3. Create an app from scratch.
4. Select the app name and Slack workspace.
5. Add the permissions chat:write and chat:write.public.
6. Install the OAuth tokens to your workspace.
7. Copy the token, which we will use in our config.yml file.

Now, open the file with your text editor (I will use vim: vi config.yml) and paste the following code. Be sure to replace the default, app_name, default_channel, and token values with your own.

apiVersion: v1
kind: Service
metadata:
  name: sink-slack
  namespace: vanus
spec:
  selector:
    app: sink-slack
  type: ClusterIP
  ports:
    - port: 8080
      name: sink-slack
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sink-slack
  namespace: vanus
data:
  config.yml: |-
    default: "input_default_name"
    slack:
      - app_name: "input_app_name"
        token: "input_your_token_here"
        default_channel: "#input_default_channel"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sink-slack
  namespace: vanus
  labels:
    app: sink-slack
spec:
  selector:
    matchLabels:
      app: sink-slack
  replicas: 1
  template:
    metadata:
      labels:
        app: sink-slack
    spec:
      containers:
        - name: sink-slack
          # For China mainland
          # image: linkall.tencentcloudcr.com/vanus/connector/sink-slack:latest
          image: public.ecr.aws/vanus/connector/sink-slack:latest
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
          volumeMounts:
            - name: config
              mountPath: /vance/config
      volumes:
        - name: config
          configMap:
            name: sink-slack

NOTE: The default value and the app_name MUST be the same.
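For example, with a hypothetical app named my-billing-bot, the ConfigMap data would look like this (every value here is a placeholder):

default: "my-billing-bot"
slack:
  - app_name: "my-billing-bot"
    token: "xoxb-your-bot-token"
    default_channel: "#aws-billing"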

Now, run kubectl apply -f config.yml to set up the Slack Sink.

img/img_4.png

Verify Slack sink is running

kubectl get pods -n vanus

img/img_5.png

We want to export our Slack Sink pod name as an environment variable so we can easily use it later on:

export SLACK_SINK=$(kubectl get pods -n vanus | grep slack | awk '{ print $1 }')
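If you want to double-check that the sink started cleanly, you can also tail its logs (an optional step, not required by the walkthrough):

kubectl logs -n vanus $SLACK_SINK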

Setting Up AWS Billing Source

Just as we set up our Slack Sink in its own directory, we will also create a directory for our AWS Billing Source:

mkdir billing

Change to the new directory and create two files: config.yml and secret.yml. The config.yml file will hold the configuration of our billing reports, such as the target URL, and secret.yml will contain the access key and secret key obtained from your AWS console (IAM - programmatic access).

img/img_6.png

Before updating our config.yml and secret.yml files, we need to create an Eventbus. An Eventbus represents a group of pipelines that receive and dispatch events. To create the eventbus, run:

vsctl eventbus create --name billing

Here, for simplicity, I have named the eventbus billing.

Our target URL is the URL we want to send CloudEvents to. For this, we will use our Vanus Gateway (192.168.49.2:30001). The target URL follows a specific pattern:

http://<ip_address>:<port>/gateway/<eventbus>

Use vim to update our config.yml file, adding this line:

"target": "http://192.168.49.2:30001/gateway/billing"

Next, we create our secret.yml file. We need our access key and secret key for this. Open your secret.yml file with vim and paste this code:

"access_key_id": "xxxxxx"
"secret_access_key": "xxxxxx"

Replace the "xxxxxx" with your credentials obtained from your AWS account.
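Since the connector pulls data through the Cost Explorer API, the IAM user behind these credentials needs read access to Cost Explorer. The exact policy the connector expects is not listed here, but a minimal sketch would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ce:GetCostAndUsage"],
      "Resource": "*"
    }
  ]
}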

Before we run our config.yml and secret.yml to receive the billing report in our Slack channel, we first need to create a subscription, which will transform the data coming from the AWS Billing Source into data that our Slack Sink can accept.

Move out of your current directory and back up one level with cd ..

Creating a Subscription

To create a subscription, run this command:

vsctl subscription create \
  --eventbus billing \
  --sink 'http://sink-slack:8080' \
  --transformer '{
    "define": {
      "source": "$.source",
      "dataDate": "$.data.date",
      "dataService": "$.data.service",
      "dataAmt": "$.data.amount",
      "dataUnit": "$.data.unit"
    },
    "template": "{\"subject\": \"AWS Billing Report\",\"message\":\"Hello, Your AWS Billing Report on ${dataDate} for ${dataService} is ${dataAmt} ${dataUnit}\"}"
  }'

img/img_7.png

You should see output like the example above. We have now created a subscription for our Slack Sink and transformed the data to be compatible with it. We can now send CloudEvents from our AWS Billing Source and receive the output in our Slack channel.
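To make the transformation concrete: given billing event data like the sketch shown earlier (date 2023-01-28, service Amazon Elastic Compute Cloud, amount 12.34, unit USD), the template above would produce an event body along these lines, which is what the Slack Sink posts to the channel:

{
  "subject": "AWS Billing Report",
  "message": "Hello, Your AWS Billing Report on 2023-01-28 for Amazon Elastic Compute Cloud is 12.34 USD"
}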

To do that, move back to our AWS Billing directory with cd billing and run this command:

nohup docker run --network=host --rm -v ${PWD}:/vance/config -v ${PWD}:/vance/secret public.ecr.aws/vanus/connector/source-aws-billing > billing.log &

The above command runs the AWS Billing Source container in Docker, mounting your config and secret files, and writes its output to a billing.log file.

To see the output of your billing.log, use cat billing.log.

Finally, check your Slack channel to see the billing report.

img/img_8.png

Conclusion

In this article, we successfully received our AWS Billing reports in a Slack channel. This gives you, as a developer or company, a simpler way of receiving AWS Billing reports.

· 6 min read

OpenAI released ChatGPT, an optimizing language model for dialogue, at the end of 2022. Once released, ChatGPT gained enormous attention and traffic, sparking much discussion on online platforms.

An AI unicorn start-up is committed to becoming an infrastructure builder and content application creator in the era of AIGC. Virtual robots are the company's main business direction. Alexis is the infrastructure leader of the AI company, and his team is mainly responsible for developing online platforms, hyper-scale offline training tasks, and big data engines. A key feature of their product is the ability to answer questions intelligently in real time, which makes the real-time nature of the online platform extremely important.

blog

Problems Encountered During Rapid Development

The surge in users brings more fault alerts

As expected, the AI trend brought the company a surge in users in a short time, and its demand for cloud products increased accordingly. They chose three cloud vendors and adopted a hybrid cloud solution. Cloud product failures are inevitable; for the infrastructure team, the GPU failure rate in large-scale scenarios is the highest, and many of these failures are repeated.

The standard process is that when a failure occurs, the developer receives an alert from the cloud product by email, and the developer needs to get in touch with the corresponding cloud vendor's customer service as soon as possible so the vendor can solve the problem quickly.

"My point of view is that there is no need to invest manpower in cases where the scenarios are clear. Before using Vanus, our team needed to be on call 24 hours a day to check for any alarms, and then manually connect with the three cloud vendors. The IM(instant messaging)tools used by Customer Service of each cloud vendor are different, and the personnel composition and behavior habits of each vendor are also different, which brings a lot of communication costs to developers. If it cannot be repaired in time, it will even affect the normal use of the intelligent platform. It is better to use code to achieve unified automated management. The problem is how to converge different cloud products into one application and distribute them to different IM tools."

The delay of the manual alarm may shut down the entire platform business

Even if the staff is on call 24 hours a day, it is difficult to ensure the timeliness of every alarm.

"If the cloud vendor's server storage fails, the entire platform business will be shut down, which means that all users cannot use the product. After the problem is resolved, we need to apologize to all individual users and business users, and even compensate them."

For an emerging product, stability is extremely important. If the user experience is good, you may retain users permanently.

Traditional message queues are not friendly to Kubernetes

"When we make technology selection, the most basic requirement is to deploy on Kubernetes, because all our businesses run on K8s."

The mainstream message queues on the market run on physical servers or virtual machines; Kafka, for example, is inherently unfriendly to Kubernetes, and its strong reliance on the page cache leads to performance degradation and requires additional manual operation and maintenance.

The original alarm data requires additional code transformation

The original alarm data is a large JSON file, requiring developers to write a lot of code to convert it into human-readable information before sending it to the cloud vendor's customer service. Different alarm data and different functions require additional code. As alarms change and increase, developers must continuously write code and put it into applications, raising development costs and later operation and maintenance costs.

How was the problem solved?

For a start-up company, time is life, and it may be difficult to win against similar products if you are late. Early on, Alexis' team ruled out building the solution themselves. To save labor costs and ensure the timeliness of alarms, they urgently needed an alarm notification system that could fully automate alarms and meet the requirement of deploying on Kubernetes. After comparing and screening similar products on the market, they finally selected Vanus.

Automatically distribute alerts to different IM tools

When developers receive alerts from different cloud products, they need to relay the alarm to the cloud vendor's customer service, who then deal with the alarm content in a timely manner. Now the Alexis team sends the alarm events of different cloud vendors to Vanus in a unified way, and then uses Vanus's rich Connector features to automatically distribute the alarms to the corresponding IM tools of the cloud vendors.

"Vanus Connector is very helpful to us, we only need to do some simple configuration to automatically distribute to the corresponding Customer Service personnel. After being familiar with the basic concepts of Vanus, we can complete the configuration in about 10 minutes. For example, push a Prometheus alarm to the Slack group and send alarm emails, text messages, etc., which helps us complete the upstream and downstream access requirements traditional messaging products cannot do."

No additional data transformation code required, speeding up business iteration

"Vanus's Transform function helped us complete the data conversion requirements. Otherwise, we would have to write a lot of glue code to connect the upstream and downstream systems. Now we only need to use Vanus' Susbcription to configure it."

Vanus is a message queue with built-in event processing capabilities. Through simple configuration of Vanus's transform function, the JSON file of the original alarm data is converted into human-readable alarm information and then distributed to the corresponding cloud vendor's customer service communication tool through a Connector. Developers are not required to continuously add conversion code to the program, which speeds up the iteration of the business.
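As a rough illustration, such a setup could be expressed with the same subscription syntax used in the billing tutorial above; the eventbus name, JSON paths, and template below are hypothetical and not the company's actual configuration:

vsctl subscription create \
  --eventbus cloud-alarms \
  --sink 'http://sink-slack:8080' \
  --transformer '{
    "define": {
      "vendor": "$.data.vendor",
      "resource": "$.data.resource",
      "detail": "$.data.detail"
    },
    "template": "{\"subject\": \"Cloud Alarm\",\"message\":\"${vendor}: ${resource} reported ${detail}\"}"
  }'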

How does it feel to use Vanus?

With Vanus, my team and I rebuilt the alarm notification system and achieved fully automated notifications with simple configuration. Rich Connectors make the system extremely scalable: if we need to add new data sources or receiving channels, we can simply use other Connectors, which systematically solves the problem of alarm notification automation.

"What attracts me most about Vanus is that it can be used as a base for us to build an event-driven system. The event-driven architecture can provide our system with excellent scalability and improve the speed of our business iteration. In addition, Vanus' fully k8s-oriented design is very useful to us, because all our business runs on k8s, saving us many maintenance costs.

In the next step, we plan to apply Vanus to the real-time synchronization scenario of our internal data, and use the Connector to synchronize the MySQL data to the MongoDB database in real-time. Afterward, it became more clear that we wanted to do our training Workflow around Vanus. In addition, some internal teams are using Redis's pub/sub and Pulsar, and we plan to converge to Vanus together. "

· 7 min read

The architecture of web applications has changed a lot as infrastructure has evolved. Over the last decade, with the trend of migrating infrastructure from private data centers to the public cloud, an increasing number of monolithic architectures have been replaced by microservice architectures.

Considering that apps may not share the same computational space in the cloud, it is inevitable that local function invocations in monolithic apps are replaced by remote communication protocols, like REST APIs or RPC calls.

Such a synchronous request-response pattern generally works well, but sometimes it doesn't. For example, in a chain of synchronous requests and responses, service A calls service B, service B calls service C, and so on. If each service needs 500ms to handle its business and there are 5 services in the chain, then you have to set the timeout of service A to 2500ms, while service B needs 2000ms, down to 500ms for the last service in the chain.
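In general, for a chain of n synchronous services that each need t ms, the k-th service from the front must allow a timeout of at least (n − k + 1) × t; the 2500ms above is simply 5 × 500ms for the first service in a 5-service chain.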

In this article, we will introduce an alternative communication pattern, Event-Driven Architecture, which enables asynchronous communication and avoids synchronous request chains. Then we will dive into the concepts of event-driven architecture and how you can better build it on the next generation of cloud computing: serverless computing.

img_1.png

· 3 min read

As a GitHub and OSS enthusiast, have you ever missed some of your important GitHub emails because tons of unread emails were waiting in your mailbox? Or perhaps you want to receive a custom notification when someone stars your repo, but GitHub can never send such notifications to you.

Now, problem solved! You can receive custom event notifications directly in your Twitter account without checking your email by using a Serverless Message Queue — Vanus.

github.png

· 14 min read

Abstract: This article recounts the message system's history from its birth to the present in narrative form, following the development of the Internet. Since 1983, message systems have gone through distinct historical periods; their usage modes, functional features, product forms, and application scenarios have changed greatly. The author chose five representative products from different eras and described the historical background of their creation. Focusing on the core problems they solved, the author attempts to analyze the key factors in their success. Finally, the author makes three predictions about the serverless era, points out the core pain points of current messaging systems in tackling serverless scenarios, and summarizes the key capabilities of future messaging products.

mq.png