NiC IT Academy

IICS-CDI-Interview Question Set 03

Published On: 10 January 2024

Last Updated: 11 September 2024


Informatica IDMC/IICS interview questions and answers

 

41. What are the Field Name conflicts in IICS, and how can we resolve them?

The cloud mapping designer throws a Field Name Conflict error when fields with the same name arrive at a downstream transformation from different upstream transformations. You can resolve the conflict either by renaming the fields in the upstream transformation itself or by creating a field rule in the downstream transformation to bulk-rename the incoming fields, for example by adding a prefix or a suffix to every incoming field.

42. What is an Intelligent structure model (ISM)?

A CLAIRE® intelligent structure model is an asset that Intelligent Structure Discovery creates based on input that represents the data that you expect the model to parse at run time.

Intelligent Structure Discovery determines the underlying patterns and structures of the input that you provide for the model and creates a model that can be used to transform, parse, and generate output groups.

Intelligent Structure Discovery creates a model that expresses the expected output data. You can use an intelligent structure model in mappings to parse unstructured, semi-structured, or structured data.

You can create models from the following input types:

  • Text files, including delimited files such as CSV files and complex files that contain textual hierarchies
  • Machine-generated files such as weblogs and clickstreams
  • JSON files
  • XML files
  • ORC files
  • Avro files
  • Parquet files
  • Microsoft Excel files
  • Data within PDF form fields
  • Data within Microsoft Word tables
  • XSD files
  • COBOL copybooks

43. What is a Structure Parser Transformation in IICS?

The Structure Parser transformation transforms your input data into a user-defined structured format based on an intelligent structure model. You can use the Structure Parser transformation to analyze data such as log files, clickstreams, XML or JSON files, Word tables, and other unstructured or semi-structured formats.

To create an intelligent structure model, use Intelligent Structure Discovery. Intelligent Structure Discovery determines the underlying structure of a sample data file and creates a model of the structure.

Intelligent Structure Discovery creates the intelligent structure model based on a sample of your input data.

You can create models from the following input types:

  • Text files, including delimited files such as CSV files and complex files that contain textual hierarchies
  • Machine-generated files such as weblogs and clickstreams
  • JSON files
  • XML files
  • ORC files
  • Avro files
  • Parquet files
  • Microsoft Excel files
  • Data within PDF form fields
  • Data within Microsoft Word tables
  • XSD files
  • COBOL copybooks

44. Can you explain the various services available in Secure Agent?

Data Integration Server: The Data Integration Server is the Secure Agent service that runs data integration jobs such as mapping, task, and task flow instances.

Mass Ingestion Service: The Mass Ingestion Service manages and validates the mass ingestion specifications that you create in the Mass Ingestion tool.

Common Integration Components: The Common Integration Components service is the Secure Agent service that runs the commands specified in a Command Task step of a taskflow.

OI Data Collector: The OI Data Collector service runs the data collectors that gather the operational data and domain-related metadata used by Operational Insights. The Secure Agent uploads the collected operational data and metadata to Informatica Intelligent Cloud Services.

Process Server: Process Server is the Secure Agent service that executes Application Integration processes, connectors, and connections.

45. What is elastic mapping in IICS?

Elastic Mapping is a feature of Informatica Intelligent Cloud Services (IICS) that enables dynamic and scalable data integration. It allows mappings to automatically adapt to changing data volumes and processing demands by adjusting compute resources. With Elastic Mapping, computational resources can be scaled up or down depending on workload, allowing for efficient performance and cost optimization.

It distributes data integration work across multiple nodes or instances using parallel processing and automated resource management. This capability integrates smoothly with existing IICS features, improving the scalability, performance, and cost-effectiveness of data integration processes.

46. How to migrate the Informatica power center to IICS?

Multiple phases are involved in the conversion from Informatica PowerCenter to Informatica Intelligent Cloud Services (IICS).

  • Assess and plan the migration by determining the required components, mappings, workflows, and dependencies.
  • Determine the migration project’s objectives, scope, and timeline.
  • Convert PowerCenter mappings to IICS format, replicate workflows, and configure connections to assess interoperability.
  • To ensure a successful migration, the migrated mappings, processes, and connections must be tested and validated.
  • Adjust and optimize the migrated processes and mappings to maximize the cloud-native capabilities of IICS.
  • Execute the migration by shifting the workload from PowerCenter to IICS, monitoring the process, and conducting validation following the migration.

After a successful migration, decommission the PowerCenter environment and associated resources.

47. What are the different connection managers available in Informatica Cloud?

The different connection managers available in Informatica Cloud are:

  • FTP Connection Manager: This connection manager can be used to connect to an FTP server in order to transfer files.
  • HTTP Connection Manager: This connection manager can be used to connect to an HTTP server in order to retrieve or submit data.
  • JMS Connection Manager: This connection manager can be used to connect to a JMS server in order to send or receive messages.
  • MQ Connection Manager: This connection manager can be used to connect to an MQ server in order to send or receive messages.
  • ODBC Connection Manager: This connection manager can be used to connect to an ODBC data source in order to retrieve or submit data.
  • Oracle Connection Manager: This connection manager can be used to connect to an Oracle database in order to retrieve or submit data.
  • SQL Server Connection Manager: This connection manager can be used to connect to an SQL Server database in order to retrieve or submit data.

48. Explain the Hierarchical Schema in IICS.

A Hierarchical Schema is a component to which we upload a JSON or XML file that defines the hierarchy of the output data. The Hierarchy Parser transformation converts the input according to the Hierarchical Schema associated with the transformation.

49. Explain Dynamic Linking

Informatica Cloud enables us to create a target file at runtime. This feature is available only in mappings. In the target, we select the “Create New at Runtime” option. We can specify a static filename, in which case the existing file with that name is replaced each time the mapping runs, or we can specify a dynamic filename so that every run creates a file with a new name.

50. Explain Informatica Cloud REST API

Informatica Cloud REST API provides programmatic access to Informatica Intelligent Cloud Services. Developers can also perform tasks such as creating, updating, and deleting connections, and starting and monitoring jobs.
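
As a minimal sketch of how the REST API is typically used, the Python snippet below logs in with the v2 user/login resource and reads recent activity log entries. The regional login URL, credentials, and values shown are placeholders based on the public REST API documentation; confirm the exact endpoints and pod URL for your own org.

    import requests

    # Regional login URL varies by org/pod; replace with your own
    LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"

    # Log in; the response carries a session id and the org-specific server URL
    login_resp = requests.post(
        LOGIN_URL,
        json={"@type": "login", "username": "your_user", "password": "your_password"},
    )
    login_resp.raise_for_status()
    session = login_resp.json()
    ic_session_id = session["icSessionId"]   # sent as a header on subsequent v2 calls
    server_url = session["serverUrl"]        # base URL for v2 resources in your org

    # Fetch the five most recent activity log entries (job run history)
    headers = {"icSessionId": ic_session_id, "Accept": "application/json"}
    log_resp = requests.get(
        f"{server_url}/api/v2/activity/activityLog",
        params={"rowLimit": 5},
        headers=headers,
    )
    log_resp.raise_for_status()
    for entry in log_resp.json():
        print(entry.get("objectName"), entry.get("state"))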

51. How do we read JSON source files in IICS?

We read JSON source files through the Hierarchy Parser transformation in IICS. We specify a Hierarchical Schema that defines the hierarchy of the output data, and the Hierarchy Parser transformation converts the input according to the Hierarchical Schema associated with the transformation.
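
The Hierarchy Parser itself is configured in the mapping designer rather than in code, but conceptually it flattens nested JSON into relational output groups. The Python sketch below only illustrates that idea with an invented order/items structure; it is not the transformation's actual implementation.

    import json

    # Invented sample of the kind of nested JSON a Hierarchy Parser might receive
    raw = '{"orderId": 101, "customer": "Acme", "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]}'
    order = json.loads(raw)

    # Flatten the parent/child hierarchy into relational-style rows
    order_row = {"orderId": order["orderId"], "customer": order["customer"]}
    item_rows = [{"orderId": order["orderId"], **item} for item in order["items"]]

    print(order_row)   # one row for the order group
    print(item_rows)   # one row per item, keyed back to its parent order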

52. List some essential resources provided by the Informatica Cloud REST API

Some of the most broadly used resources are:

  • Activity Monitor: Returns details of currently running jobs from the activity monitor.
  • Connection: Returns details of Data Integration connections.
  • Activity Log: Returns job details from the activity log.
  • Schedule: Returns schedule details and lets us create or update schedules.
  • Job: Starts or stops a task.
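
As an illustration of the Job resource, the sketch below starts a mapping task through the v2 API, assuming a session was already established as in the earlier login example. The server URL, session id, and task id are placeholders, and the taskType code for a mapping task ("MTT") is taken from the public REST API documentation; verify the code for your task type.

    import requests

    # Values obtained from a prior v2 login call (see the earlier example); placeholders here
    server_url = "https://<your pod>.dm-us.informaticacloud.com/saas"
    ic_session_id = "<session id from login>"

    headers = {
        "icSessionId": ic_session_id,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

    # Start a mapping task; other task type codes exist for synchronization, masking, and workflow tasks
    body = {"@type": "job", "taskId": "<task id>", "taskType": "MTT"}
    resp = requests.post(f"{server_url}/api/v2/job", json=body, headers=headers)
    resp.raise_for_status()
    print("Job start request accepted:", resp.status_code)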

53. What are the parameter types available in the Informatica Cloud?

IICS supports two types of parameters.

Input Parameter: Similar to a parameter in PowerCenter. The parameter value remains constant, set to the value defined in the mapping task (MCT) or in a parameter file.

In-out Parameter: Similar to a variable in PowerCenter. An In-out parameter can hold a constant value or change its value within a single task run.

54. To perform Incremental Loading, What system variables are available in IICS?

IICS provides access to the following system variables, which can be used as data filter variables to pick up only newly inserted or updated records.

$LastRunTime returns the last time the task ran successfully.

$LastRunDate returns only the last date on which the task ran successfully.

For example, a source filter condition such as UPDATED_TIMESTAMP > $LastRunTime (the column name is illustrative) loads only the records changed since the previous successful run. The values of $LastRunDate and $LastRunTime are stored in the Informatica Cloud repository/server, and it is not possible to override the values of these variables.

55. Differentiate between the connected and unconnected sequence generator transformation in Informatica Cloud Data Integration.

The Sequence Generator transformation can be used in two different ways in Informatica Cloud: with incoming fields disabled, and with incoming fields not disabled. The difference between the two becomes visible when the NEXTVAL field is mapped to multiple downstream transformations:

  • A Sequence Generator with incoming fields not disabled generates the same sequence of numbers for each downstream transformation.
  • A Sequence Generator with incoming fields disabled generates a unique sequence of numbers for each downstream transformation.

56. Explain Partitioning in Informatica Cloud Data Integration

Partitioning enables parallel processing of data through separate pipelines. With partitioning enabled, you can select the number of partitions for the mapping. The DTM process then creates a reader thread, a transformation thread, and a writer thread for each partition, processing the records concurrently and reducing the job’s completion time. Partitioning is enabled by configuring the Source transformation in the mapping designer. Informatica Cloud Data Integration supports the following partitioning methods.

  • Pass through Partitioning: The mapping task processes data without redistributing rows among partitions. All rows in a single partition stay in the partition. Choose pass-through partitioning when you want to create additional partitions to improve performance, but do not want to change the distribution of data across partitions. You can use this method for sources such as Amazon S3, Netezza, and Teradata.
  • Key Range Partitioning: Distributes the data into multiple partitions based on the partition key selected and the ranges of values defined for it. You choose a field as the partition key and specify the start and end ranges of its values (illustrated conceptually in the sketch after this list).
  • Fixed Partitioning: Can be enabled for sources that are not relational or do not support key range partitioning. You specify the number of partitions by entering a value.
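
The sketch below is only a conceptual illustration of key range partitioning, not how the DTM threads are implemented: rows are routed to a partition by comparing an invented key field against user-defined ranges, and each partition is then processed in parallel.

    from concurrent.futures import ThreadPoolExecutor

    # Invented rows; the partition key is "order_id"
    rows = [{"order_id": i, "amount": i * 10} for i in range(1, 101)]
    ranges = [(1, 34), (34, 67), (67, 101)]   # [start, end) key range per partition

    # Route each row to the partition whose range covers its key value
    partitions = [[r for r in rows if lo <= r["order_id"] < hi] for lo, hi in ranges]

    def process(partition):
        # Stand-in for the reader/transformation/writer work done per partition
        return sum(r["amount"] for r in partition)

    # Process all partitions concurrently, mimicking parallel pipelines
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        results = list(pool.map(process, partitions))

    print(results)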

57. How to pass data from one mapping to another in Informatica Cloud Data Integration?

Data can be passed from one mapping task to another in Informatica Cloud Data Integration within a taskflow by using parameters. The mapping task that provides the data should have an In-Out Parameter whose value is set with the SetVariable functions, for example an expression such as SetMaxVariable($$MaxLoadedId, ID), where the parameter and field names are illustrative. The mapping task that receives the data should have either an Input parameter or an In-Out parameter defined in the mapping to read the data passed from the upstream task.

58. What does Data Masking mean?

Data masking conceals sensitive data by substituting realistic test data for the values in sensitive columns. Informatica Cloud helps you create masking rules to hide such sensitive data.

59. What is a user-defined function (UDF)?

A user-defined function is a reusable function that you can use in expressions. You can create a user-defined function to build a complex expression using the Informatica Intelligent Cloud Services transformation language. User-defined functions use the same syntax and can use the same transformation language components as transformation and field expressions.
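
For example, assuming a user-defined function named RemoveSpaces has been created (an illustrative name), it can be called from a field or transformation expression as :UDF.RemoveSpaces(CUSTOMER_NAME); the :UDF prefix tells the expression editor to resolve the name as a user-defined function.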

60. What is the difference between a “Deployment Group” and a “Taskflow” in Informatica IICS?

A Deployment Group is a set of related objects (mappings, tasks, connections) that can be moved together between environments. A Taskflow is a sequence of tasks that allows you to define conditional execution paths and orchestrate how the tasks run.
