K8S Tools Sharing
Kubecost Core Architecture Overview
Kustomize
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.
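As a minimal sketch (the file names and labels here are hypothetical), a kustomization.yaml placed next to your plain manifests could look like this:

# kustomization.yaml -- hypothetical example
resources:
- deployment.yaml
- service.yaml
namePrefix: dev-        # prefix every resource name for this environment
commonLabels:
  app: myapp            # label stamped onto every resource

Apply the customized output with kubectl apply -k . (or render it with kustomize build .); the original deployment.yaml and service.yaml stay untouched.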

Data Locality: Bring your data close to compute.
https://www.alluxio.io/
Make your data local to compute workloads for Spark caching, Presto caching, Hive caching and more.
Data Accessibility: Make your data accessible.
Whether it sits on-prem or in the cloud, in HDFS or S3, make your files and objects accessible in many different ways.
Data On-Demand: Make your data as elastic as compute.
Effortlessly orchestrate your data for compute in any cloud, even if data is spread across multiple clouds.
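As a quick illustration (bucket name and paths are hypothetical), the Alluxio CLI can mount an S3 bucket into the Alluxio namespace so compute frameworks read it like a local path:

bin/alluxio fs mount /mnt/s3 s3://my-bucket/data   # expose the bucket under /mnt/s3
bin/alluxio fs ls /mnt/s3                          # browse it like local storage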
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
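A minimal sketch of a Slurm batch script (job name and resource numbers are arbitrary):

#!/bin/bash
#SBATCH --job-name=demo      # name shown in the queue
#SBATCH --nodes=2            # allocate two compute nodes
#SBATCH --ntasks=4           # run four tasks in total
#SBATCH --time=00:10:00      # wall-clock limit
srun hostname                # launch the tasks on the allocated nodes

Submit it with sbatch demo.sh and watch the queue with squeue.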
https://slurm.schedmd.com/
We have some business requirements around data aggregation and online processing, so we did a quick PoC on Apache Druid. Next I will show how to set up Druid quickly and start an ingestion task.
1. Select a release version that is compatible with your existing system and download the package.
2. Choose which Druid services to run on each node, then start them on every node by executing the corresponding start-cluster script (see the sketch after this list).
3. Open the Druid console in a browser at http://IP:8888.
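A sketch of step 2, assuming the cluster start scripts shipped in the Druid distribution (run the one matching each node's role):

bin/start-cluster-master-server   # master node: Coordinator + Overlord
bin/start-cluster-data-server     # data nodes: Historical + MiddleManager
bin/start-cluster-query-server    # query nodes: Broker + Router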
Next I load data from a local file, ingest it as a datasource, and finally query the data with SQL.
Task configuration
{
  "type": "index_parallel",
  "id": "index_parallel_wikiticker-2015-09-12-sampled_2020-02-18T11:17:29.236Z",
  "resource": {
    "availabilityGroup": "index_parallel_wikiticker-2015-09-12-sampled_2020-02-18T11:17:29.236Z",
    "requiredCapacity": 1
  },
  "spec": {
    "dataSchema": {
      "dataSource": "wikiticker-2015-09-12-sampled",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": {
            "column": "time",
            "format": "iso"
          },
          "dimensionsSpec": {
            "dimensions": [
              "channel",
              "cityName",
              "comment",
              "countryIsoCode",
              "countryName",
              "isAnonymous",
              "isMinor",
              "isNew",
              "isRobot",
              "isUnpatrolled",
              "namespace",
              "page",
              "regionIsoCode",
              "regionName",
              "user"
            ]
          }
        }
      },
      "metricsSpec": [
        {
          "type": "count",
          "name": "count"
        },
        {
          "type": "longSum",
          "name": "sum_added",
          "fieldName": "added",
          "expression": null
        },
        {
          "type": "longSum",
          "name": "sum_deleted",
          "fieldName": "deleted",
          "expression": null
        },
        {
          "type": "longSum",
          "name": "sum_delta",
          "fieldName": "delta",
          "expression": null
        },
        {
          "type": "longSum",
          "name": "sum_metroCode",
          "fieldName": "metroCode",
          "expression": null
        }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "HOUR",
        "rollup": true,
        "intervals": null
      },
      "transformSpec": {
        "filter": null,
        "transforms": []
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "firehose": {
        "type": "local",
        "baseDir": "/opt/druid-0.16.0/quickstart/tutorial",
        "filter": "wikiticker-2015-09-12-sampled.json.gz",
        "parser": null
      },
      "appendToExisting": false
    },
    "tuningConfig": {
      "type": "index_parallel",
      "maxRowsPerSegment": null,
      "maxRowsInMemory": 1000000,
      "maxBytesInMemory": 0,
      "maxTotalRows": null,
      "numShards": null,
      "partitionsSpec": null,
      "indexSpec": {
        "bitmap": {
          "type": "concise"
        },
        "dimensionCompression": "lz4",
        "metricCompression": "lz4",
        "longEncoding": "longs"
      },
      "indexSpecForIntermediatePersists": {
        "bitmap": {
          "type": "concise"
        },
        "dimensionCompression": "lz4",
        "metricCompression": "lz4",
        "longEncoding": "longs"
      },
      "maxPendingPersists": 0,
      "forceGuaranteedRollup": false,
      "reportParseExceptions": false,
      "pushTimeout": 0,
      "segmentWriteOutMediumFactory": null,
      "maxNumConcurrentSubTasks": 1,
      "maxRetry": 3,
      "taskStatusCheckPeriodMs": 1000,
      "chatHandlerTimeout": "PT10S",
      "chatHandlerNumRetries": 5,
      "maxNumSegmentsToMerge": 100,
      "totalNumMergeTasks": 10,
      "logParseExceptions": false,
      "maxParseExceptions": 2147483647,
      "maxSavedParseExceptions": 0,
      "partitionDimensions": [],
      "buildV9Directly": true
    }
  },
  "context": {
    "forceTimeChunkLock": true
  },
  "groupId": "index_parallel_wikiticker-2015-09-12-sampled_2020-02-18T11:17:29.236Z",
  "dataSource": "wikiticker-2015-09-12-sampled"
}
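To submit this task, assuming the JSON above is saved as task.json and the Overlord listens on the quickstart port (adjust host and port for your deployment):

curl -X POST -H 'Content-Type: application/json' -d @task.json http://IP:8081/druid/indexer/v1/task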
Task running status
When the task finishes, the new datasource appears under Datasources, its segments under Segments, and you can query it from the Query view.
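For example, a simple SQL query against the new datasource from the Query view; because rollup is enabled, SUM("count") recovers the original event count:

SELECT channel, SUM("count") AS edits
FROM "wikiticker-2015-09-12-sampled"
GROUP BY channel
ORDER BY edits DESC
LIMIT 10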
SpringOne Platform 2019 in Austin, https://springoneplatform.io/
Cloud platforms provide a wealth of benefits for the organizations that use them. However, there's no denying that adopting the cloud can put strain on DevOps teams: developers must use microservices to architect for portability, while operators manage extremely large hybrid and multi-cloud deployments. Istio lets you connect, secure, control, and observe services.
First, download an Istio release, unzip the package, and enter the directory.
Second, verify the installation environment
bin/istioctl verify-install
Next, deploy Istio using the demo profile, which enables many features such as tracing, Kiali, and Grafana
bin/istioctl manifest apply --set profile=demo
Then check the status of the Istio pods and make sure all the related pods are in the Running state.
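For example:

kubectl get pods -n istio-system   # every pod should report STATUS Running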
Istio Commands
istioctl experimental authz
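A hedged usage sketch (the pod name below is hypothetical): this command can check which authorization policies apply to a pod, e.g.

istioctl experimental authz check productpage-v1-123456-abcde   # list policies affecting this pod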