{"id":659,"date":"2018-03-19T17:02:03","date_gmt":"2018-03-19T16:02:03","guid":{"rendered":"http:\/\/jenacopterlabs.de\/?page_id=659"},"modified":"2018-03-22T22:03:42","modified_gmt":"2018-03-22T21:03:42","slug":"python-data-preprocessing-in-pci-geomatica","status":"publish","type":"page","link":"https:\/\/jenacopterlabs.de\/?page_id=659","title":{"rendered":"Satellite Data Preprocessing with Python\/PCI I (Import &#038; Ortho)"},"content":{"rendered":"<p><strong>GEO214 \/409 \/402 Python Data Processing Code Examples &#8211; Section I<\/strong><\/p>\n<p>19.3.2018<\/p>\n<p>Batch multispectral data preprocessing for RapidEye data in PCI Geomatica 2017. This will be extended over the next few months &#8211; I will keep a date imprint here and there to make the updates a bit more transparent. This is more or less a reader for two of the sessions in GEOG214 and GEO402\/GEOG409.<\/p>\n<p><strong>Section I: NITF file import and reprojection in a given directory:<\/strong><\/p>\n<ol>\n<li>imports NITF RE L1B data to PIX files, including all available metadata<\/li>\n<li>reprojects the files to UTM WGS84 CC,<\/li>\n<li>resamples the data files to low-resolution copies at 30 m and 100 m,<\/li>\n<li>reads header and georeferencing information into a text report and<\/li>\n<li>reads simple band statistics into a text report<\/li>\n<\/ol>\n<p><strong>Section II: \u00a0\u00a0<\/strong><a href=\"http:\/\/jenacopterlabs.de\/?page_id=703\"><strong>Cloud masking\/haze removal\/terrain analysis &#8211; full processing in the atmospheric correction package ATCOR.<\/strong><strong>\u00a0<\/strong><\/a><\/p>\n<ol>\n<li>Calculate a cloud, haze and water mask<\/li>\n<li>Remove the haze from the satellite data<\/li>\n<li>Calculate DEM derivatives: slope\/aspect\/shadow-cast layer\n<ol>\n<li>Optional: lowpass filter the DEM<\/li>\n<li>slope\/aspect\/shadow-cast layer calculation<\/li>\n<\/ol>\n<\/li>\n<li>Atmospheric correction on the haze-corrected input, calculating the local incidence angle and outputting 16-
bit scaled, topographically normalized reflectance.<\/li>\n<\/ol>\n<p><strong>Section III: \u00a0R statistics integration using rpy2 and numpy<\/strong><\/p>\n<p><strong>Section IV: \u00a0 Integration of GRASS binaries into workflows\u00a0<\/strong><\/p>\n<p><strong>Section V: \u00a0 \u00a0 Integration of AGISOFT<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>1. Head section of a Python PCI script<\/strong><\/p>\n<p>Here we load some PCI libraries, define the locale environment and add some libraries for input handling and for writing text files.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nprint &quot;******************* Import AND Reproject NTF **************************&quot;\r\n# bySHese 19.3.2018 RE L1b Import and Reprojection and Resamp to 30m and 100m\r\n# Script that loops through files in a folder and does the following:\r\n# - import NTF files to the PCIDSK (PIX) file format\r\n# - reproject to UTM WGS84 with defined projection parameters - change as appropriate \r\n# - reduce the resolution to produce 30m and 100m low-res copies \r\n# - write some report information to a text file\r\n# all of this is done recursively on a user-supplied directory tree (&quot;InFolder&quot;). \r\n\r\n# -------------------------------\r\n# Import the required algorithms\r\n# -------------------------------\r\nprint &quot; Importing PCI Libraries ... 
&quot;\r\n\r\nfrom pci.fun import *\r\nfrom pci.lut import *\r\nfrom pci.pcimod import *\r\nfrom pci.exceptions import * \r\nfrom pci.ortho import * #needed for ortho\r\nfrom pci.fimport import fimport #needed for fimport\r\nfrom pci.resamp import * #needed for resamp\r\nfrom sys import argv\r\nfrom pci.shl import * #for reading PCI PIX header info\r\nfrom pci.asl import * #needed for asl segment listing\r\nfrom pci.cdl import * #needed for cdl\r\nfrom pci.his import * #needed for his report\r\nfrom pci.nspio import Report, enableDefaultReport #to write PCI reports from PACE programs \r\nfrom pci.prorep import * #needed by prorep\r\n\r\nimport sys # These libraries are required to extract file names from folders\r\nimport os\r\nimport fnmatch\r\n\r\n# The following locale settings are used to ensure that Python is configured\r\n# the same way as PCI's C\/C++ code. \r\nimport locale\r\nlocale.setlocale(locale.LC_ALL, &quot;&quot;)\r\nlocale.setlocale(locale.LC_NUMERIC, &quot;C&quot;)\r\n\r\nscript, InFolder, DEMFile = argv\r\n<\/pre>\n<pre>Comments are defined using a hash sign - you can also put a comment directly behind a line of code<\/pre>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n# comments are defined after the hash sign\r\n<\/pre>\n<p><strong>2. The input section of the script <\/strong><\/p>\n<p>where path variables are defined &#8211; you usually want these from the user and not hard-coded into your script. Using the following expression, for example, the user would have to start the script by typing:<\/p>\n<p>python pythonscriptname \/dir\/where\/the\/files\/are<\/p>\n<p>to start the script and to include the &#8220;InFolder&#8221; string\u00a0(in this example); note that this particular script also expects a DEM file path as a second argument. 
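As a minimal, PCI-independent sketch (the script name and the helper are hypothetical), this is how such an argv unpacking behaves, with a small guard so that a missing argument produces a readable usage message instead of a ValueError:

```python
def parse_args(argv):
    # argv[0] is the script name; the user must supply InFolder and DEMFile,
    # mirroring the "script, InFolder, DEMFile = argv" line in the head section
    if len(argv) != 3:
        raise SystemExit("Usage: python pythonscriptname <InFolder> <DEMFile>")
    script, in_folder, dem_file = argv
    return in_folder, dem_file
```

In the real script you would call `parse_args(sys.argv)` once at the top.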
We later use &#8220;InFolder&#8221; to place the string into the path name.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nscript, InFolder, DEMFile = argv\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>An alternative would be:<\/p>\n<p>to define the path later in the routine, where the script waits for user input &#8211; this is OK, but you have to interact and therefore cannot batch these commands.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\nInFolder = raw_input(&quot;Please enter full input folder path: &quot;) # user enters the pathname here (Python 2 raw_input)\r\n<\/pre>\n<p>However, this is not so elegant because you cannot stack these Python scripts later when you want to run them from another script.<\/p>\n<p><strong>3. The file selection that defines which files are processed<\/strong><\/p>\n<p>This section is programmed to recursively find all files with a specific naming convention and to process them later. It&#8217;s a search routine that creates a container for the input filenames. Later we simply let the PCI algorithm that processes the data ingest the filename container that we create here:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n#\r\n# ---------------------\r\n# Set File variables\r\n# ---------------------\r\nprint &quot;Processing in folder path: &quot; + InFolder\r\n\r\n# ----------------------------------------------------\r\n# Create list of files to loop through from InFolder\r\n# ----------------------------------------------------\r\nprint &quot;   Creating list of valid files ...&quot;\r\n# This line is a wildcard filter. In this case only the XML metadata files from RE will be used\r\nfile_filter = &quot;*metadata.xml&quot;  # this is the NITF RE L1B specific metadata file\r\n                               # using fimport on it will also import all metadata fields, and\r\n                               # these fields will be important later, e.g. 
with ATCOR\r\n# This list will be populated with the full pathname of each matching file in InFolder\r\ninput_files_list = &#x5B;] \r\n\r\n# section from Geomatica\r\n# os.walk searches a directory tree and produces the file name of any file it encounters\r\n# r is the main directory\r\n# d is any subdirectory within r\r\n# f lists the file names within r\r\nfor r,d,f in os.walk(InFolder):\r\n        for file in fnmatch.filter(f, file_filter): # fnmatch.filter keeps only files matching file_filter\r\n                print &quot;   Found valid input file: &quot;, file\r\n                input_files_list.append(os.path.join(r,file)) # The file name and full path are \r\n                                                              # added to input_files_list\r\n                                                              #\r\n<\/pre>\n<p><strong>4. The core processing section now processes the input files<\/strong><\/p>\n<p>FIMPORT (in this example) will ingest all files from input_files_list. You set the various parameters first (fili, filo etc.) and then execute the algorithm with the full listing of the variable names.<\/p>\n<p>Make sure the indentation is exactly as deep as appropriate for your position in the for-loop. 
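A minimal, PCI-independent illustration of that indentation rule (the file names are hypothetical):

```python
# Lines sharing the same indentation depth belong to the same block:
# both indented lines below run once per file name in the loop.
results = []
for name in ["a.xml", "b.xml"]:
    out = name + "-FIMPORT.pix"   # indented: inside the for-loop
    results.append(out)           # same depth: still inside the loop
count = len(results)              # back at column 0: runs once, after the loop
```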
Python cares about the indentation of every line!<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n# ----------------------------------------------------------------------------------\r\n# FIMPORT algorithm \r\n# ----------------------------------------------------------------------------------\r\nprint &quot;&quot;\r\nprint &quot; Entering raw input information for FIMPORT algorithm ...&quot;\r\n# Run FIMPORT algorithm with each file listed in input_files_list\r\nprint &quot;&quot;\r\nprint &quot; Beginning the FIMPORT ...&quot;\r\n\r\nfor xml in input_files_list:\r\n # Define each of the required arguments\r\n fili = xml # input file\r\n filo = xml + &quot;-FIMPORT.pix&quot;\r\n dbiw = &#x5B;]\r\n poption = &quot;AVERAGE&quot; \r\n dblayout = &quot;BAND&quot;\r\n\r\n try: fimport(fili, filo, dbiw, poption, dblayout)\r\n except PCIException, e: print e\r\n except Exception, e: print e\r\n<\/pre>\n<p>fili = xml takes each entry of input_files_list in turn as the input filename.<br \/>\nSince we now have the right PIX files available, we can start the reprojection of all files.<\/p>\n<p><strong>5. Reprojection of all PCIDSK files to UTM32 WGS84 <\/strong><\/p>\n<p>Here we use ORTHO (in Geomatica version 2017); ORTHO has a slightly modified syntax compared to ORTHO2.<br \/>\nSince we now have different\/new input files, we repeat the input file listing for this step.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n# ----------------------------------------------------\r\n# Create list of files to loop through from InFolder for UTM Reprojection\r\n# ----------------------------------------------------\r\nprint &quot; Creating list of valid files for ORTHO ...&quot;\r\n# This line is a wildcard filter. 
In this case only FIMPORT.pix files will be used\r\nfile_filter = &quot;*FIMPORT.pix&quot; \r\n\r\n# This list will be populated with the full pathname of each matching file in InFolder\r\ninput_files_list = &#x5B;] \r\n\r\n# os.walk searches a directory tree and produces the file name of any file it encounters\r\n# r is the main directory\r\n# d is any subdirectory within r\r\n# f lists the file names within r\r\nfor r,d,f in os.walk(InFolder):\r\n    for file in fnmatch.filter(f, file_filter): # fnmatch.filter keeps only *FIMPORT.pix files\r\n        print &quot; Found valid input file: &quot;, file\r\n        input_files_list.append(os.path.join(r,file)) # The file name and full path are added to input_files_list\r\n\r\n# ----------------------------------------------------------------------------------\r\n# Use ORTHO algorithm \r\n# ----------------------------------------------------------------------------------\r\nprint &quot; Entering raw input information for Ortho algorithm ...&quot;\r\n# Run ORTHO algorithm with each file listed in input_files_list\r\nprint &quot; Beginning ORTHO and RESAMP ...&quot;\r\n\r\nfor FIMPORT in input_files_list:\r\n # Define each of the required arguments\r\n mfile = FIMPORT # input file\r\n dbic = &#x5B;]\r\n mmseg = &#x5B;2]\r\n dbiw= &#x5B;]\r\n srcbgd = &quot;&quot;\r\n filo = FIMPORT + &quot;-ORTHO.pix&quot;\r\n ftype = &quot;PIX&quot;\r\n foptions = &quot;&quot;\r\n outbgd = &#x5B;] #float\r\n ulx = &quot;&quot;\r\n uly = &quot;&quot;\r\n lrx = &quot;&quot;\r\n lry = &quot;&quot;\r\n edgeclip = &#x5B;] #integer\r\n tipostrn = &quot;&quot;\r\n mapunits = &quot;UTM54 D000&quot; # make sure you are in the right UTM zone here, e.g.: Tokyo\/Sendai 54, London 30, Koeln\/Bonn\/HH\/Berlin\/Rom 32, Kulunda Steppe (Russia) 44\r\n bxpxsz = &quot;6.5&quot;\r\n bypxsz = &quot;6.5&quot;\r\n filedem = DEMFile # DEMFile is set by the user on the CLI \r\n dbec = &#x5B;1] # we 
expect the DEM in channel 1\r\n backelev = &#x5B;]\r\n elevref = &quot;ELLIPS&quot; # height reference; ASTER GDEM2 heights refer to the WGS84 \/ EGM96 geoid\r\n elevunit = &quot;METER&quot;\r\n elfactor = &#x5B;0.0,1.0] # DEM value offset and scale\r\n proc = &quot;&quot;\r\n sampling = &#x5B;1]\r\n resample = &quot;CUBIC&quot;\r\n\r\n try: ortho(mfile, dbic, mmseg, dbiw, srcbgd, filo, ftype, foptions, outbgd, ulx, uly, lrx, lry, edgeclip, tipostrn, mapunits, bxpxsz, bypxsz, filedem, dbec, backelev, elevref, elevunit, elfactor, proc, sampling, resample)\r\n except PCIException, e: print e\r\n except Exception, e: print e\r\n #\r\n<\/pre>\n<p>ORTHO takes a number of parameters that need to be tuned correctly for what you want. Some of these parameters, however, can be left at their default values because they do not affect our processing right now.<\/p>\n<p>It is important to get mapunits right: you need to know the correct UTM zone for your data, and D000 stands for WGS84 &#8211; change this as appropriate.<\/p>\n<p><strong>6. Finally we want to resample the files to low res versions<\/strong><\/p>\n<p>This is easily done with RESAMP.<\/p>\n<p>RESAMP just takes your input file and the desired resolution in meters and resamples according to your chosen interpolation method.<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n        print &quot;   Beginning RESAMP 30m ...&quot;\r\n        # Run RESAMP        \r\n        fili = FIMPORT + &quot;-ORTHO.pix&quot;\r\n        filo = FIMPORT + &quot;-ORTHO-RESAMP30m.pix&quot;\r\n        dbic\t=\t&#x5B;1,2,3,4,5]\r\n        dbsl\t=\t&#x5B;]\r\n        sltype\t=\t&quot;ALL&quot;\r\n        ftype\t=\t&quot;PIX&quot;\r\n        foptions\t= ' '\r\n        pxszout\t=\t&#x5B;30, 30]\t# output pixel size in meters\r\n        resample\t=\t&quot;CUBIC&quot;\t# Cubic Convolution method\r\n        \r\n        try: resamp(fili, dbic, dbsl, sltype, filo, ftype, foptions, pxszout, resample)\r\n        except PCIException, e: print e\r\n        except Exception, e: print e\r\n    
\r\n        print &quot;   Beginning RESAMP 100m ...&quot;\r\n        fili = FIMPORT + &quot;-ORTHO.pix&quot;\r\n        filo = FIMPORT + &quot;-ORTHO-RESAMP100m.pix&quot;\r\n        dbic\t=\t&#x5B;1,2,3,4,5]\r\n        dbsl\t=\t&#x5B;]\r\n        sltype\t=\t&quot;ALL&quot;\r\n        ftype\t=\t&quot;PIX&quot;\r\n        foptions\t= ' '\r\n        pxszout\t=\t&#x5B;100, 100]\t# output pixel size in meters\r\n        resample\t=\t&quot;CUBIC&quot;\t# Cubic Convolution method\r\n\r\n        try: resamp(fili, dbic, dbsl, sltype, filo, ftype, foptions, pxszout, resample )\r\n        except PCIException, e: print e\r\n        except Exception, e: print e\r\n        print &quot;Ortho finished&quot;, file\r\n<\/pre>\n<p>RESAMP takes the file name from the ORTHO processing, and we simply append new extensions to the file name so the reduced-resolution versions are identifiable from the file naming.<\/p>\n<p><strong>7. Read some header and segment \/ channel information to text files:<\/strong><\/p>\n<p>Here we use SHL, PROREP, ASL and CDL to read the information into a text file. 
This is comparable to the &#8220;REPORT&#8221; mode in EASI\/PACE and easily appends to an existing ASCII file.<\/p>\n<p>This is handy for keeping track of processing steps and parameter setup.<\/p>\n<p>Everything between the opening sequence:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n        print &quot;Reporting now about all finished files&quot;\r\n        rep_file = FIMPORT + &quot;ORTHO-REPORT.txt&quot;\r\n        try: \r\n            Report.clear()\r\n            enableDefaultReport(rep_file)\r\n<\/pre>\n<p>and the closing sequence:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n        finally: \r\n            enableDefaultReport('term')  # this will close the report file\r\n<\/pre>\n<p>&#8230; is written to the ASCII file rep_file.<\/p>\n<p>The full section simply places the PCI reporting calls exactly between these two expressions, as in the following section:<\/p>\n<p>&#8230;<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n        # now let's check some of the PIX-based info for the record - always handy to have - and we\r\n        # can look into these reports if needed - \r\n        # to do this we need the pci.nspio module with its report framework to handle the text export, write\r\n        # and close. 
\r\n        \r\n        print &quot;Reporting now about all finished files&quot;\r\n        rep_file = FIMPORT + &quot;ORTHO-REPORT.txt&quot;\r\n        try: \r\n            Report.clear()\r\n            enableDefaultReport(rep_file)\r\n        \r\n            # first let's look into the header\r\n            file = FIMPORT + &quot;-ORTHO.pix&quot;\r\n            print &quot;   Reading Header of file: &quot; + file\r\n            shl(file)  # starts the shl bin\r\n            \r\n            # now let's read the georeferencing segment with projection information \r\n            print &quot;   Reading Georef Segment of file: &quot; + file\r\n            dbgeo=&#x5B;1]\r\n            prorep(file, dbgeo) # starts the prorep pci bin\r\n            \r\n            # now let's list all segments in the file\r\n            print &quot;   Reading Segment List of file: &quot; + file\r\n            ltyp=&quot;FULL&quot;\r\n            aslt=&#x5B;]\r\n            assn=''\r\n            asl(file, ltyp, aslt, assn)\r\n            \r\n            # now check the channels and report in short mode\r\n            print &quot;   Listing all channels of file: &quot; + file\r\n            dbcl    =   &#x5B;]\r\n            dtyp     =   &quot;SHORT&quot;\r\n            cdl(file,dbcl,dtyp)      \r\n        \r\n        finally: \r\n            enableDefaultReport('term')  # this will close the report file\r\n        print &quot;FINISHED All&quot;\r\n<\/pre>\n<p><strong>8. Reading some basic statistics from the PCIDSK files:<\/strong><\/p>\n<p>If we want some more statistical insight, it is useful to read some more values into the report files. This can easily be done with HIS and SPLREG. Neither is in any way comparable to statistical analysis in R, but we get some information about the raster data without needing to open (sometimes) very big raster data files. 
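To make explicit what such a band-to-band linear regression computes, here is a plain-Python sketch (an illustration only, independent of PCI's SPLREG) of an ordinary least-squares fit between two equally long band value lists:

```python
def band_regression(x, y):
    # Ordinary least-squares fit y = intercept + slope*x between two band
    # value lists - the kind of band-to-band relation a regression summarizes.
    n = float(len(x))
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    s_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    s_xx = sum((xi - mean_x) ** 2 for xi in x)
    slope = s_xy / s_xx
    intercept = mean_y - slope * mean_x
    return intercept, slope
```

For example, band_regression([1, 2, 3, 4], [2, 4, 6, 8]) returns (0.0, 2.0) - a perfect y = 2x relation.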
This can be handy to find out if something went wrong in our processing workflow.<\/p>\n<p>HIS is easily integrated in front of the report &#8220;close&#8221; sequence in the following code:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n            # finally, print simple descriptive statistics into the report\r\n            print &quot;   Reading simple band stats of file: &quot; + file\r\n            dbic\t=\t&#x5B;1,2,3,4,5]\t\r\n            gmod\t=\t'ON'\t# show graphic mode\r\n            cmod\t=\t'OFF'\t# no cumulative mode\r\n            pcmo\t=\t'OFF'\t# no percentage mode\r\n            nsam\t=\t&#x5B;]\t# default number of samples\r\n            trim\t=\t&#x5B;]\t# no trimming\r\n            hisw\t=\t&#x5B;]\t# no specific histogram window\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\t# parameter to receive output\r\n            his( file, dbic, gmod, cmod, pcmo, nsam, trim, hisw, mask, imstat )\r\n<\/pre>\n<p>We could also add some more useful information about the band-to-band relations by calculating SPLREG, as in the following example:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n            # and we add some linear band-to-band regressions to the report\r\n            dbic\t=\t&#x5B;1,2]\t# input channels\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\r\n            splreg( file, dbic, mask, imstat )\r\n            dbic\t=\t&#x5B;2,3]\t# input channels\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\r\n            splreg( file, dbic, mask, imstat )\r\n            dbic\t=\t&#x5B;3,4]\t# input channels\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\r\n            splreg( file, dbic, mask, imstat )\r\n            dbic\t=\t&#x5B;4,5]\t# input channels\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\r\n            
splreg( file, dbic, mask, imstat )\r\n            dbic\t=\t&#x5B;3,5]\t# input channels\r\n            mask\t=\t&#x5B;]\t# process entire image\r\n            imstat\t=\t&#x5B;]\r\n            splreg( file, dbic, mask, imstat )\r\n<\/pre>\n<p>Finally, don&#8217;t forget to close the reporting sequence with the following snippet:<\/p>\n<pre class=\"brush: python; title: ; notranslate\" title=\"\">\r\n        finally: \r\n            enableDefaultReport('term')  # this will close the report file\r\n        print &quot;FINISHED All&quot;\r\n<\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>GEO214 \/409 \/402 Python Data Processing Code Examples &#8211; Section I 19.3.2018 Batch multispectral data preprocessing for RapidEye data in PCI Geomatica 2017. This will be extended over the next few months &#8211; I will keep a date imprint here and there to make the updates a bit more transparent. This is more or less a reader for two of the&#46;&#46;&#46;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":18,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_uag_custom_page_level_css":"","footnotes":""},"class_list":["post-659","page","type-page","status-publish","hentry"],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"post-thumbnail":false,"cd-small":false,"cd-medium":false,"cd-standard":false},"uagb_author_info":{"display_name":"S\u00f6ren Hese","author_link":"https:\/\/jenacopterlabs.de\/?author=1"},"uagb_comment_info":0,"uagb_excerpt":"GEO214 \/409 \/402 Python Data Processing Code Examples &#8211; Section I 19.3.2018 Batch multispectral data preprocessing for RapidEye data in PCI Geomatica 2017. This will be extended over the next few months &#8211; I will keep a date imprint here and there to make the updates a bit more transparent. 
This is more\/less a&hellip;","_links":{"self":[{"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/pages\/659","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=659"}],"version-history":[{"count":45,"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/pages\/659\/revisions"}],"predecessor-version":[{"id":760,"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/pages\/659\/revisions\/760"}],"up":[{"embeddable":true,"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=\/wp\/v2\/pages\/18"}],"wp:attachment":[{"href":"https:\/\/jenacopterlabs.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=659"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}