nifi-iotdb-bundle
Apache NiFi Introduction
Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data.
Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.
Apache NiFi includes the following capabilities:
- Browser-based user interface
- Seamless experience for design, control, feedback, and monitoring
- Data provenance tracking
- Complete lineage of information from beginning to end
- Extensive configuration
- Loss-tolerant and guaranteed delivery
- Low latency and high throughput
- Dynamic prioritization
- Runtime modification of flow configuration
- Back pressure control
- Extensible design
- Component architecture for custom Processors and Services
- Rapid development and iterative testing
- Secure communication
- HTTPS with configurable authentication strategies
- Multi-tenant authorization and policy management
- Standard protocols for encrypted communication including TLS and SSH
PutIoTDB
This is a processor that reads the content of the incoming FlowFile as individual records using the configured 'Record Reader' and writes them to Apache IoTDB using the native interface.
Properties of PutIoTDB
property | description | default value | necessary |
---|---|---|---|
Host | The host of IoTDB. | null | true |
Port | The port of IoTDB. | 6667 | true |
Username | Username to access the IoTDB. | null | true |
Password | Password to access the IoTDB. | null | true |
Record Reader | Specifies the type of Record Reader controller service to use for parsing the incoming data and determining the schema. | null | true |
Schema | The schema that IoTDB requires is not well supported by NiFi's inferred schemas, so you can define the schema here. This also lets you set the encoding type and compression type of each time series. If this property is not set, the inferred schema is used. Supports expression language. | null | false |
Aligned | Whether to use the aligned interface. Supports expression language. | false | false |
MaxRowNumber | Specifies the maximum row number of each tablet. Supports expression language. | 1024 | false |
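As a rough illustration of the table above, the property set of a PutIoTDB processor might look like the following sketch. The concrete values (host, credentials, reader name) are hypothetical, not prescribed by the processor:

```python
# Hypothetical property values for a PutIoTDB processor, mirroring the
# table above. The NiFi UI stores all property values as strings.
putiotdb_properties = {
    "Host": "127.0.0.1",                # required; no default
    "Port": "6667",                     # required; defaults to 6667
    "Username": "root",                 # required
    "Password": "root",                 # required
    "Record Reader": "JsonTreeReader",  # a configured Record Reader controller service
    "Aligned": "false",                 # optional; supports expression language
    "MaxRowNumber": "1024",             # optional; max rows per tablet
}

# The required properties from the table must all be set.
required = ["Host", "Port", "Username", "Password", "Record Reader"]
print(all(putiotdb_properties.get(p) for p in required))  # True
```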
Inferred Schema of Flowfile
There are a couple of rules about the flowfile:
- The flowfile must be readable by the configured `Record Reader`.
- The schema of the flowfile must contain a field `Time`, and it must be the first field.
- The data type of `Time` must be `STRING` or `LONG`.
- Every field except `Time` must start with `root.`.
- The supported data types are `INT`, `LONG`, `FLOAT`, `DOUBLE`, `BOOLEAN`, `TEXT`.
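The rules above can be sketched as a small standalone check. This is not part of the processor; the record and its field names are hypothetical, chosen only to satisfy the rules:

```python
# A record satisfying the inferred-schema rules: "Time" first,
# a LONG (or STRING) timestamp, and all other fields under "root.".
record = {
    "Time": 1672531200000,            # first field; LONG timestamp
    "root.sg.d1.temperature": 25.0,   # measurement fields start with "root."
    "root.sg.d1.status": True,
}

def check_record(rec):
    keys = list(rec.keys())
    # Rule: "Time" must be present and must be the first field.
    if not keys or keys[0] != "Time":
        return False
    # Rule: the time value must be STRING or LONG.
    if not isinstance(rec["Time"], (str, int)):
        return False
    # Rule: every other field must start with "root.".
    return all(k.startswith("root.") for k in keys[1:])

print(check_record(record))  # True
```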
Convert Schema by property
As mentioned above, defining the schema via the `Schema` property is more flexible and expressive than relying on the inferred schema.
The structure of the `Schema` property:
```json
{
    "timeType": "LONG",
    "fields": [{
        "tsName": "root.sg.d1.s1",
        "dataType": "INT32",
        "encoding": "RLE",
        "compressionType": "GZIP"
    }, {
        "tsName": "root.sg.d1.s2",
        "dataType": "INT64",
        "encoding": "RLE",
        "compressionType": "GZIP"
    }]
}
```
Note
- The first column must be `Time`. The rest must be arranged in the same order as in the `fields` array of the JSON.
- The schema JSON must contain `timeType` and `fields`.
- There are only two options for `timeType`: `LONG` and `STRING`.
- The `tsName` and `dataType` of each field must be set.
- The `tsName` must start with `root.`.
- The supported `dataType` values are `INT32`, `INT64`, `FLOAT`, `DOUBLE`, `BOOLEAN`, `TEXT`.
- The supported `encoding` values are `PLAIN`, `DICTIONARY`, `RLE`, `DIFF`, `TS_2DIFF`, `BITMAP`, `GORILLA_V1`, `REGULAR`, `GORILLA`.
- The supported `compressionType` values are `UNCOMPRESSED`, `SNAPPY`, `GZIP`, `LZO`, `SDT`, `PAA`, `PLA`, `LZ4`.
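The notes above amount to a set of constraints on the `Schema` JSON, which can be sketched as a small validator. This is only an illustration of the rules, not the processor's own validation logic:

```python
import json

# Allowed values taken from the notes above.
SUPPORTED_TYPES = {"INT32", "INT64", "FLOAT", "DOUBLE", "BOOLEAN", "TEXT"}
SUPPORTED_ENCODINGS = {"PLAIN", "DICTIONARY", "RLE", "DIFF", "TS_2DIFF",
                       "BITMAP", "GORILLA_V1", "REGULAR", "GORILLA"}
SUPPORTED_COMPRESSION = {"UNCOMPRESSED", "SNAPPY", "GZIP", "LZO",
                         "SDT", "PAA", "PLA", "LZ4"}

def validate_schema(text):
    schema = json.loads(text)
    # The JSON must contain timeType and fields.
    if "timeType" not in schema or "fields" not in schema:
        return False
    # timeType has only two options: LONG and STRING.
    if schema["timeType"] not in ("LONG", "STRING"):
        return False
    for field in schema["fields"]:
        # tsName and dataType must be set; tsName must start with "root.".
        if not field.get("tsName", "").startswith("root."):
            return False
        if field.get("dataType") not in SUPPORTED_TYPES:
            return False
        # encoding and compressionType, if present, must be supported.
        if "encoding" in field and field["encoding"] not in SUPPORTED_ENCODINGS:
            return False
        if "compressionType" in field and field["compressionType"] not in SUPPORTED_COMPRESSION:
            return False
    return True

example = '''{"timeType": "LONG", "fields": [
    {"tsName": "root.sg.d1.s1", "dataType": "INT32",
     "encoding": "RLE", "compressionType": "GZIP"}]}'''
print(validate_schema(example))  # True
```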
Relationships
relationship | description |
---|---|
success | Data was written successfully, or the flow file was empty. |
failure | The schema or the flow file is malformed. |