Enhancement - import Ansible JSON

Hello

When comparing DATAGERRY object import/export JSON with the JSON output of the Ansible setup module, we can see a number of differences.

DATAGERRY:

  {
    "name": "hostname",
    "value": "defralin1"
  },
  {
    "name": "domain",
    "value": "example.net"
  },
  {
    "name": "fqdn",
    "value": "defralin1.example.net"
  },
  {
    "name": "architecture",
    "value": "x86_64"
  },
  {
    "name": "biosvendor",
    "value": "VMware, Inc."
  },
  {
    "name": "biosversion",
    "value": "VMW71.00V.13989454.B64.1906190538"
  },
  {
    "name": "blockdevices",
    "value": "sdb,sda"
  }

ANSIBLE:

"ansible_processor_cores": 1,
"ansible_processor_count": 8,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 8,
"ansible_product_name": "VMware Virtual Platform",
"ansible_product_serial": "VMware-42 0c 01 1f dc 95 4d 4a-c3 d6 6b 7f 5b c9 74 61",
"ansible_product_uuid": "1f010c42-95dc-4a4d-c3d6-6b7f5bc97461",
"ansible_product_version": "None",
"ansible_python": {
  "executable": "/usr/bin/python3",
  "has_sslcontext": true,
  "type": "cpython",
  "version": {
    "major": 3,
    "micro": 10,
    "minor": 6,
    "releaselevel": "final",
    "serial": 0
  },
  "version_info": [
    3,
    6,
    10,
    "final",
    0
  ]
},

As we can see, the biggest difference is the structure: Ansible uses each attribute name directly as a JSON key with its value, while DATAGERRY wraps every field in a separate object with explicit “name” and “value” keys, presumably to identify the fields properly.

To work around this, we are currently developing scripts that “translate” the Ansible format for DATAGERRY, which means we essentially build the import JSONs from scratch. I guess it would be much simpler to implement Ansible JSON import on the DATAGERRY side. What do you guys think?
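
To illustrate, here is a minimal sketch of what such a translation script looks like on our side. It assumes the facts were saved to disk as a flat JSON object like the excerpt above; the file name “facts.json” and the stripping of the “ansible_” prefix are our own choices, not part of either tool:

  import json

  # Assumption: "facts.json" contains a flat object of "ansible_*" keys,
  # like the ANSIBLE excerpt above.
  with open("facts.json") as f:
      facts = json.load(f)

  fields = []
  for key, value in facts.items():
      # Skip nested structures such as "ansible_python" for now; the
      # DATAGERRY fields in our export are flat name/value pairs.
      if isinstance(value, (dict, list)):
          continue
      # Strip the "ansible_" prefix so "ansible_fqdn" becomes "fqdn".
      name = key[len("ansible_"):] if key.startswith("ansible_") else key
      fields.append({"name": name, "value": str(value)})

  print(json.dumps(fields, indent=2))

Lists could of course also be joined into comma-separated strings instead of being skipped, as in the “blockdevices” field of the DATAGERRY excerpt.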

The biggest issue we have encountered so far is the handling of “public_id”. We assume that every object in DATAGERRY is identified by this value and that it has to exist in the JSON file prepared for import. The problem: if we want to prepare an import file for a hundred servers, we have to assign a new public_id to every single one of them, otherwise the import fails. And to do that, we first need to know the next available/free public_id in DATAGERRY and start counting from there when creating the JSON file(s).

It would be great if we could import these files without a public_id and have DATAGERRY assign the respective numbers on its own. I think this is a basic feature for every company that starts working with your product and imports hundreds or thousands of objects.
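
Right now that means our scripts have to do something like the following sketch, where START_ID first has to be looked up by hand in DATAGERRY (all values here are illustrative):

  # START_ID is the next free public_id, looked up manually beforehand.
  START_ID = 1234

  hostnames = ["defralin1", "defralin2"]
  objects = [{"fields": [{"name": "hostname", "value": h}]} for h in hostnames]

  # Assign consecutive public_ids so the import does not fail.
  for offset, obj in enumerate(objects):
      obj["public_id"] = START_ID + offset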

Sto

Hi @sto,

When you import objects from a JSON file (or from a CSV file), you can simply leave out the public_id and DATAGERRY will use the next available public ID.
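
For example, an entry like this (schematic, with other keys omitted) imports fine without a public_id; DATAGERRY assigns the next free one:

  {
    "fields": [
      {
        "name": "hostname",
        "value": "defralin1"
      }
    ]
  }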

For one of the next feature releases (I guess 1.6), we plan to implement an Import Daemon, which will be able to import data from different sources, such as external databases. Ansible could be one of those sources. We haven’t planned that functionality in detail yet, but we will do so in the next couple of weeks.

Hey @mbatz,

Thank you for this info. We will test importing without public_id in a moment.

Also great news on importing from different data sources!

Sto