About Daggy
Common information about Daggy and Getting Started
Daggy
Daggy - Data Aggregation Utility and C/C++ developer library for catching data streams
Daggy's main goals are to be server-less, cross-platform, simple, and easy to use.
Daggy can be helpful for developers, QA, DevOps, and engineers who need to debug, analyze, and control data streams, including requests and responses, in distributed network systems, for example those based on a micro-service architecture.
In short, daggy runs local or remote processes at the same time, simultaneously reads their output, and streams and aggregates it under a single session.

Introduction and goal concepts

The Daggy Project consists of:
  1. Core - a library for catching and aggregating streams
  2. Daggy - a console application for aggregating streams into files

Daggy High Level Design


Basic terms

The main goal of the Daggy Software System is to obtain data from environments, declared in sources, and turn it into streams that are delivered via providers into aggregators.
An environment contains data for streams. Out of the box, the Core supports local and remote environments, and it can be extended with user-defined environments. A local environment is located on the same host as the Daggy Core instance. A remote environment is located on a host different from the Daggy Core instance. A user-defined environment can be located anywhere, e.g. in databases, on network disks, etc.
Sources are declarations of how to obtain data from environments. They describe which kind of data needs to be converted into streams and which provider is required.
Here is an example of sources that contains one local environment and one remote environment:
```yaml
aliases:
  - &my_commands
    pingYa:
      exec: ping ya.ru
      extension: log
    pingGoo:
      exec: ping goo.gl
      extension: log

  - &ssh_auth
    user: {{env_USER}}
    passphrase: {{env_PASSWORD}}

sources:
  local_environment:
    type: local
    commands: *my_commands
  remote_environment:
    host: 192.168.1.9
    type: ssh2
    parameters: *ssh_auth
    commands: *my_commands
```
The streams from the local environment are generated via the local provider (see type: local).
The streams from the remote environment are generated via the ssh2 provider (see type: ssh2).
Out of the box, the Core provides local and ssh2 providers. Both providers obtain the data for streams from processes: the local provider runs local processes and generates streams from the process output channels (stdout and stderr). The ssh2 provider runs remote processes via the ssh2 protocol and also generates streams from the process output channels. The Daggy Core can be extended with user-defined providers that generate streams, for example, from an HTTP environment.
Providers generate streams in parts, driven by commands. Each part carries a unique seq_num value, assigned uninterruptedly and consistently. This means the full data of a stream can be recovered by concatenating its parts in ascending seq_num order. Each stream is generated by a command.
The Core combines streams from any number of providers into a single Core Streams Session. The streams from a Core Streams Session can be aggregated by aggregators or viewed by the user.
Out of the box, the Core provides several types of aggregators:
  1. File - aggregates streams into files at runtime, as data arrives. This aggregator is used by the Daggy Console Application.
  2. Console - aggregates streams into console output. This aggregator is used by the Daggy Console Application.
  3. Callback - aggregates streams into ANSI C11 callbacks. This aggregator is used by the Core ANSI C11 Interface.
The Core library can be extended with user-defined aggregators.

Getting Started

Getting Daggy

Download and install on Windows, Linux, and macOS

Download archives with binaries or installation packages from the latest release

Fedora

```shell
sudo dnf install daggy daggy-devel
```

Make install for system library

Build requirements: Conan, CMake, git, and a C++17/20 compiler.
```shell
git clone https://github.com/synacker/daggy.git
mkdir build
cd build
conan install ../daggy --build=missing -o package_deps=True
conan build ../daggy
cmake --install .
```
Conan create for conan package
```shell
git clone https://github.com/synacker/daggy.git
mkdir build
cd build
conan create ../daggy --build=missing
```

Add as conan package dependency

conanfile.py
```python
def requirements(self):
    self.requires("daggy/2.1.2")
```

Check installation of Daggy Core C++17/20 interface

test.cpp
```cpp
#include <DaggyCore/Core.hpp>
#include <DaggyCore/Sources.hpp>
#include <DaggyCore/aggregators/CFile.hpp>
#include <DaggyCore/aggregators/CConsole.hpp>

#include <QCoreApplication>
#include <QTimer>

namespace {
constexpr const char* json_data = R"JSON(
{
    "sources": {
        "localhost" : {
            "type": "local",
            "commands": {
                "ping1": {
                    "exec": "ping 127.0.0.1",
                    "extension": "log"
                },
                "ping2": {
                    "exec": "ping 127.0.0.1",
                    "extension": "log",
                    "restart": true
                }
            }
        }
    }
}
)JSON";
}

int main(int argc, char** argv)
{
    QCoreApplication app(argc, argv);
    daggy::Core core(*daggy::sources::convertors::json(json_data));

    daggy::aggregators::CFile file_aggregator("test");
    daggy::aggregators::CConsole console_aggregator("test");

    core.connectAggregator(&file_aggregator);
    core.connectAggregator(&console_aggregator);

    QObject::connect(&core, &daggy::Core::stateChanged, &core,
    [&](DaggyStates state) {
        if (state == DaggyFinished)
            app.quit();
    });

    QTimer::singleShot(3000, &core, [&]()
    {
        core.stop();
    });

    QTimer::singleShot(5000, &core, [&]()
    {
        app.exit(-1);
    });

    core.prepare();
    core.start();

    return app.exec();
}
```

Check installation of Daggy Core C11 interface

test.c
```c
#include <stdio.h>
#ifdef _WIN32
#include <Windows.h>
#else
#include <time.h>   /* nanosleep */
#include <unistd.h>
#endif

#include <DaggyCore/Core.h>

const char* json_data =
"{\
    \"sources\": {\
        \"localhost\" : {\
            \"type\": \"local\",\
            \"commands\": {\
                \"ping1\": {\
                    \"exec\": \"ping 127.0.0.1\",\
                    \"extension\": \"log\"\
                },\
                \"ping2\": {\
                    \"exec\": \"ping 127.0.0.1\",\
                    \"extension\": \"log\"\
                }\
            }\
        }\
    }\
}";

void sleep_ms(int milliseconds)
{
#ifdef _WIN32
    Sleep(milliseconds);
#elif _POSIX_C_SOURCE >= 199309L
    struct timespec ts;
    ts.tv_sec = milliseconds / 1000;
    ts.tv_nsec = (milliseconds % 1000) * 1000000;
    nanosleep(&ts, NULL);
#else
    usleep(milliseconds * 1000);
#endif
}

int quit_after_time(void* msec)
{
    sleep_ms(*(int*)(msec));
    libdaggy_app_stop();
    return 0;
}

void on_daggy_state_changed(DaggyCore core, DaggyStates state);

void on_provider_state_changed(DaggyCore core, const char* provider_id, DaggyProviderStates state);
void on_provider_error(DaggyCore core, const char* provider_id, DaggyError error);

void on_command_state_changed(DaggyCore core, const char* provider_id, const char* command_id, DaggyCommandStates state, int exit_code);
void on_command_stream(DaggyCore core, const char* provider_id, const char* command_id, DaggyStream stream);
void on_command_error(DaggyCore core, const char* provider_id, const char* command_id, DaggyError error);

int main(int argc, char** argv)
{
    DaggyCore core;
    libdaggy_app_create(argc, argv);
    libdaggy_core_create(json_data, Json, &core);
    libdaggy_connect_aggregator(core,
                                on_daggy_state_changed,
                                on_provider_state_changed,
                                on_provider_error,
                                on_command_state_changed,
                                on_command_stream,
                                on_command_error);
    libdaggy_core_start(core);
    int time = 5000;
    libdaggy_run_in_thread(quit_after_time, &time);
    return libdaggy_app_exec();
}

void on_daggy_state_changed(DaggyCore core, DaggyStates state)
{
    printf("Daggy state changed: %d\n", state);
}

void on_provider_state_changed(DaggyCore core, const char* provider_id, DaggyProviderStates state)
{
    printf("Provider %s state changed: %d\n", provider_id, state);
}

void on_provider_error(DaggyCore core, const char* provider_id, DaggyError error)
{
    printf("Provider %s error. Code: %d, Category: %s\n", provider_id, error.error, error.category);
}

void on_command_state_changed(DaggyCore core, const char* provider_id, const char* command_id, DaggyCommandStates state, int exit_code)
{
    printf("Command %s in provider %s state changed: %d\n", command_id, provider_id, state);
}

void on_command_stream(DaggyCore core, const char* provider_id, const char* command_id, DaggyStream stream)
{
    printf("Command %s in provider %s has stream from session %s: %li\n", command_id, provider_id, stream.session, stream.seq_num);
}

void on_command_error(DaggyCore core, const char* provider_id, const char* command_id, DaggyError error)
{
    printf("Command %s in provider %s has error. Code: %d, Category: %s\n", command_id, provider_id, error.error, error.category);
}
```

Check installation of Daggy Console application

```shell
daggy --help
Usage: daggy [options] *.yaml|*.yml|*.json

Options:
  -o, --output <folder>       Set output folder
  -f, --format <json|yaml>    Source format
  -i, --stdin                 Read data aggregation sources from stdin
  -t, --timeout <time in ms>  Auto complete timeout
  -h, --help                  Displays help on commandline options.
  --help-all                  Displays help including Qt specific options.
  -v, --version               Displays version information.

Arguments:
  file                        data aggregation sources file
```

Getting started with data aggregation and streaming using the Daggy Console Application

Simple Sources

Create simple.yaml
```yaml
sources:
  localhost:
    type: local
    commands:
      pingYa:
        exec: ping ya.ru
        extension: log
```
Run daggy
```shell
daggy simple.yaml
```
Check console output
```
23:07:23:977 | AppStat  | Start aggregation in 01-04-20_23-07-23-977_simple
23:07:23:977 | ProvStat | localhost | New state: Started
23:07:23:977 | CommStat | localhost | pingYa | New state: Starting
23:07:23:977 | CommStat | localhost | pingYa | New state: Started
```
All commands from simple.yaml/simple.json are streamed into the 01-04-20_23-07-23-977_simple folder as output files.
Tailing streams from Simple Data Source
```shell
tail -f 01-04-20_23-07-23-977_simple/*
64 bytes from ya.ru (87.250.250.242): icmp_seq=99 ttl=249 time=21.2 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=100 ttl=249 time=18.8 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=101 ttl=249 time=23.5 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=102 ttl=249 time=18.8 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=103 ttl=249 time=18.8 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=104 ttl=249 time=17.4 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=105 ttl=249 time=17.4 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=106 ttl=249 time=20.1 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=107 ttl=249 time=25.8 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=108 ttl=249 time=35.1 ms
64 bytes from ya.ru (87.250.250.242): icmp_seq=109 ttl=249 time=21.1 ms
```
Stop data aggregation and streaming
Type CTRL+C to stop data aggregation and streaming. Type CTRL+C twice for a hard stop of the application, without waiting for the cancellation of child local and remote processes.
```
23:07:23:977 | AppStat  | Start aggregation in 01-04-20_23-07-23-977_simple
23:07:23:977 | ProvStat | localhost | New state: Started
23:07:23:977 | CommStat | localhost | pingYa | New state: Starting
23:07:23:977 | CommStat | localhost | pingYa | New state: Started
^C23:17:56:667 | ProvStat | localhost | New state: Finishing
23:17:56:668 | CommStat | localhost | pingYa | New state: Finished. Exit code: 0
23:17:56:668 | ProvStat | localhost | New state: Finished
23:17:56:668 | AppStat  | Stop aggregation in 01-04-20_23-07-23-977_simple
```
Investigate aggregated data
```shell
ls -l 01-04-20_23-07-23-977_simple/
-rw-r--r-- 1 muxa muxa 45574 Apr  1 23:17 localhost-pingYa.log
```

Example of Data Aggregation Sources with multiple commands and remote data aggregation and streaming

```yaml
aliases:
  - &my_commands
    pingYa:
      exec: ping ya.ru
      extension: log
    pingGoo:
      exec: ping goo.gl
      extension: log

  - &ssh_auth
    user: {{env_USER}}
    passphrase: {{env_PASSWORD}}

sources:
  localhost:
    type: local
    commands: *my_commands
  remotehost:
    host: 192.168.1.9
    type: ssh2
    parameters: *ssh_auth
    commands: *my_commands
  remotehost2:
    host: 192.168.1.9
    type: ssh2
    parameters: *ssh_auth
    commands: *my_commands
  remotehost3:
    host: 192.168.1.9
    type: ssh2
    parameters: *ssh_auth
    commands: *my_commands
```