
Category Archives: Azure

Azure Databricks – Part 1 – How to create an Azure Databricks workspace and a Spark Cluster?

In this blog, we will learn how to create an Azure Databricks workspace and a Spark cluster step by step using the Azure portal.

Create an Azure Databricks workspace:
Step 1: Sign in to the Azure portal. In the upper-left corner of the home page, select Create a resource. In the Search the Marketplace box, enter Azure Databricks and press Enter.
Step 2: Select Azure Databricks from the search results and click on the Create button.
Step 3: Enter the following information: Subscription, Resource group, Workspace name, Region, and Pricing tier.
Step 4: Click the Review + create tab, then click on the Create button. Once you click on the Create button, it will take 3 to 4 minutes to create the resource.

Create a Spark cluster in Azure Databricks:
Step 1: In the Azure portal, go to the Databricks workspace that you created, and then click Launch Workspace.
Step 2: You are redirected to the Azure Databricks portal. From the portal, click New Cluster.
Step 3: On the New Cluster page, provide the values to create the cluster.

Hope this will help.
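If you prefer scripting the cluster creation over the portal UI, the same thing can be done through the Databricks Clusters REST API. The snippet below is only a minimal sketch, assuming you already have a workspace URL and a Databricks personal access token; the runtime version and VM size are placeholder values you would swap for ones available in your workspace.

# Minimal sketch: create a Spark cluster via the Databricks Clusters REST API (2.0).
# The workspace URL, token, runtime version, and node type below are placeholders.
import requests

workspace_url = "https://adb-1234567890123456.7.azuredatabricks.net"  # your workspace URL
token = "dapiXXXXXXXXXXXXXXXX"                                         # personal access token

payload = {
    "cluster_name": "demo-cluster",
    "spark_version": "7.3.x-scala2.12",  # pick a runtime listed in your workspace
    "node_type_id": "Standard_DS3_v2",   # pick a VM size available in your region
    "num_workers": 2,
}

response = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
response.raise_for_status()
print("Created cluster:", response.json()["cluster_id"])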


Azure Databricks – Part 2 – How to read Amazon DynamoDB table data using Notebooks

In this blog, we will learn how to connect to AWS DynamoDB and read table data using a Python script, step by step.

Step 1: In the left pane, select Azure Databricks. From the Common Tasks, select New Notebook.
Step 2: In the Create Notebook dialog box, enter a name, select Python as the language, and select the Spark cluster that you created earlier.
Step 3: Once the notebook is created, you can write a Python script to connect to AWS DynamoDB using the boto3 client library. To connect to AWS DynamoDB you must have an AWS access key ID and an AWS secret access key.

Python script:

# Databricks notebook source
import boto3
import pandas as pd

# Create a boto3 session with your AWS credentials and region.
session = boto3.session.Session(
    aws_access_key_id="your AWS access key ID",
    aws_secret_access_key="your AWS secret access key",
    region_name="your region",
)
dynamodb = session.resource("dynamodb")

# Scan the table and load the returned items into a pandas DataFrame.
table = dynamodb.Table("Table Name")
response = table.scan()
items = response["Items"]
data = pd.DataFrame(items)

# Convert the DataFrame to CSV text and print it.
output = data.to_csv(index_label="idx", encoding="utf-8")
print(output)

Step 4: Now you can check the output by pressing Shift + Enter or by clicking Run Cell.

Hope this will help.
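Note that a single scan call returns at most 1 MB of data, so large tables come back in pages. The following is a minimal sketch of the usual pagination loop in boto3, assuming "table" has been set up exactly as in the script above:

# Minimal sketch: page through every item in the table with boto3.
# Assumes "table" was created as in the script above (dynamodb.Table("Table Name")).
items = []
response = table.scan()
items.extend(response["Items"])

# Keep scanning while DynamoDB reports there is more data to read.
while "LastEvaluatedKey" in response:
    response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
    items.extend(response["Items"])

print(f"Fetched {len(items)} items in total")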


SQL Trigger not populating with Table in Logic App

Wondering how to solve the issue where a SQL-triggered Azure Logic App does not let you select your table in the dropdown? This blog will help you fix it.


ADF’s Wrangling Data Flow (Power Query) – How do you get matched rows from two data sources using Inner Joins?

Posted On April 25, 2021 by Sandip Patel

In this blog, we will learn how to get matched rows from two data sources using an inner join in ADF’s Wrangling Data Flow, step by step.

Step 1: Add a Power Query flow as per the below screenshot.
Step 2: In the new Power Query, give it a proper name and add the data sources that you want to merge. Here I am adding two datasets named “DS_EMP1” and “DS_EMP2”; both data sources have employee information.
Step 3: By default, the UserQuery will point to the first dataset query. All the transformations should be done on the UserQuery.
Step 4: Now click on Merge queries to merge your datasets.
Step 5: Select a table and matching columns to create a merged table. Here I have selected EmpID as the common key to merge the data, and the join kind is “Inner”.
Step 6: Once you click the OK button, you will get a warning: “Nested join must be expanded”.
Step 7: Click on the expand button to expand your result and select whatever columns you want from the other data source. In my case both datasets have the same column names, so I deselected all the columns from the result dataset.
Step 8: Now the UserQuery will show the matched rows. That’s all you need to do to get matched rows from two data sources.

Hope this will help.
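For intuition, the result of this merge is the classic inner join: only rows whose EmpID appears in both sources survive. The pandas sketch below is purely illustrative and is not part of the Wrangling Data Flow itself; the sample data and columns are made-up stand-ins for DS_EMP1 and DS_EMP2:

# Illustrative sketch: the inner-join behaviour of the merge, expressed in pandas.
# The sample data and column names are assumptions mirroring the two employee datasets.
import pandas as pd

ds_emp1 = pd.DataFrame({"EmpID": [1, 2, 3], "Name": ["Asha", "Ravi", "Meera"]})
ds_emp2 = pd.DataFrame({"EmpID": [2, 3, 4], "Dept": ["HR", "IT", "Sales"]})

# Inner join keeps only the EmpIDs present in both datasets (2 and 3 here).
matched = ds_emp1.merge(ds_emp2, on="EmpID", how="inner")
print(matched)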


ADF’s Mapping Data Flows – How do you get matched rows from two data sources using Inner Joins?

Posted On March 23, 2021 by Sandip Patel

In this blog, we will learn how to get matched rows from two data sources using an inner join in ADF’s Mapping Data Flows, step by step.

Step 1: Add a data flow activity named “InnerJoin_Test”, and in the Settings tab add a new data flow. Select the Source Settings tab, add a source transformation, and connect it to one of your datasets.
Step 2: In the Data preview tab you can see your data.
Step 3: Add another source named “Employee2”, and in its Source Settings tab connect it to your other dataset.
Step 4: In the Data preview tab you can see your data.
Step 5: Add a Join transformation named “InnerJoin”. The Join transformation allows you to join the two data streams. In the Join settings tab, set the left stream and the right stream, and select the join type as Inner. Apply the join condition on a unique field; in this demo I picked “Emp Id” as the join condition.
Step 6: In the Data preview tab you can see the matched rows. That’s all you need to do to get matched rows from two data sources.

Hope this will help.
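Since Mapping Data Flows execute on Spark clusters behind the scenes, the join above is conceptually the same as a PySpark inner join. The sketch below is only an illustration; the sample DataFrames and the “EmpId” column are assumptions, not objects from the pipeline:

# Illustrative sketch: the equivalent inner join in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("InnerJoinDemo").getOrCreate()

employee1 = spark.createDataFrame([(1, "Asha"), (2, "Ravi")], ["EmpId", "Name"])
employee2 = spark.createDataFrame([(2, "IT"), (3, "HR")], ["EmpId", "Dept"])

# Inner join: only EmpId values present in both streams survive (EmpId 2 here).
matched = employee1.join(employee2, on="EmpId", how="inner")
matched.show()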


ADF’s Mapping Data Flows – How do you get distinct rows and row counts from the data source?

Posted On March 23, 2021 by Sandip Patel

In this blog, we will learn how to get distinct rows and row counts from a data source via ADF’s Mapping Data Flows, step by step.

Step 1: Create an Azure Data Factory pipeline.
Step 2: Add a data flow activity named “DistinctRows”.
Step 3: Go to Settings and add a new data flow. Select the Source Settings tab, add a source transformation, and connect it to one of your datasets.
Step 4: In the Projection tab you can change the column data types. Here I have changed my Emp ID column to Integer.
Step 5: In the Data preview tab you can see your data.
Step 6: Add an Aggregate transformation named “DistinctRows”. In the Group by settings, you need to choose which column or combination of columns will make up the key(s) for ADF to determine distinct rows; in this demo I picked “Emp ID” as my key column.
Step 7: The inherent nature of the Aggregate transformation is to block all columns not used in the aggregate. But here we are using the aggregate to filter out non-distinct rows, so we need every column from the original dataset. To do this, go to the Aggregates settings and choose the column pattern. Here you will need to choose between keeping the first set of values from the duplicate rows or the last; essentially, choose which row you want to be the source of truth.
Step 8: That’s all you need to do to find distinct rows in your data. Click on the Data preview tab to see the result; you can see the duplicate data has been removed.
Step 9: The row count is just another aggregate. To create a row count, go to the Aggregates settings and use the function count(1). This will create a running count of every row.

Hope this will help.
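For intuition, the same "distinct rows plus row count" logic can be expressed in PySpark, which is what data flows run on under the hood. This is an illustrative sketch only; the sample DataFrame and the “EmpID” column are assumptions:

# Illustrative sketch: distinct rows and row counts in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("DistinctRowsDemo").getOrCreate()

employees = spark.createDataFrame(
    [(1, "Asha"), (1, "Asha"), (2, "Ravi")], ["EmpID", "Name"]
)

# Keep one row per EmpID, like the group-by key in the Aggregate transformation.
distinct_rows = employees.dropDuplicates(["EmpID"])
distinct_rows.show()

# Row counts, like the count(1) aggregate.
print("Total rows:", employees.count())
print("Distinct rows:", distinct_rows.count())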


Changing the Process of a Project already created in Azure DevOps

While setting up and working on a project in Azure DevOps, we sometimes realise that the process we selected is not what we need for the current project, and we want to change the process in Azure DevOps without losing any tasks. Before making any changes, keep one check in mind: the inherited process you are trying to move the project to should contain the expected work item types.

Open the project and click on Project Settings as shown in the screenshot below. In the project settings you can see the project process as “Agile”; click on it. You will be redirected to a page where you have to select the project and click on the three vertical dots to change the process. As you click on Change process, a panel will open from the left-hand side, from where you need to select the new process. As you select the process and click on Save, the project process will be changed. You can then go back to the project and see all the tasks, which you can move as per the new process.


Let’s get started with Azure Function for Dynamics 365 CRM: Part 2 [Cloud Deployment]

In the previous blog, we learned how to create an Azure Function to connect with Dynamics 365 CRM and create an account record whenever the function is triggered by an HTTP request. [Link]. In this blog, we will learn how to deploy the Azure Function App to the Azure cloud so that we can trigger that function from anywhere.

Prerequisites:
1. Microsoft Azure account
2. Active subscription [Free Trial or Pay-as-you-Go]. If you are creating an Azure Function for learning purposes, go with the Free Trial; if you are working on development for your organization or a client, go for a Pay-as-you-Go subscription.

Step 1: Create a resource on the Azure portal for the Azure Function deployment. Log in to portal.azure.com with your account, click on Create a resource, and create a Function App resource as shown in the screenshot below. Here we will create a new resource group; if you already have one, you can reuse it. The following configuration details need to be filled in while creating the resource:
Function Name: This becomes part of a global URL used to access the Function App, so it must be unique.
Publish: You can publish your code directly or use a Docker container.
Runtime stack: Here we are building a .NET application, so we choose .NET as the runtime stack. Other options are available; you can create an Azure Function for Node.js, Python, Java, or PowerShell Core, or use a custom handler.
After configuration, click on Review + create. It will take a few minutes to create and deploy the Azure Function App in the cloud.

Step 2: Publish the Azure Function from Visual Studio. Open the Azure Function project in Visual Studio, right-click on the project, and choose Publish. On the screen that appears, click Start. Choose Azure as the target, since we are deploying the function to the Azure cloud, and click Next. The configuration screen opens; log in with an Azure account that has an active subscription, then select the resource group and the Function App as shown in the screenshot below. After the configuration is finished, click on Publish. It will take a few minutes to deploy the application to the cloud.

Step 3: Get the Azure Function URL from the Azure portal. Open the newly created Function App in the Azure portal; you will find that Function1 has been deployed. Click on that function to open it, then click on Get Function URL and copy the URL.

Testing: We will need an API testing tool; here I am using Postman, which you can download from https://www.postman.com/downloads/. Open Postman, create a new tab, select POST as the request type, paste the URL, and click Send. Now we can look at the Dynamics 365 CRM environment and check whether the account was created, comparing the data before and after the request.

Stay tuned for the next blog, in which we will create a Contact Us form in HTML and post its data through the Azure Function to Dynamics 365 CRM to store responses from your website.

External links:
Azure Function pricing: https://azure.microsoft.com/en-in/pricing/details/functions/
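If you would rather call the deployed function from a script instead of Postman, a plain HTTP POST works just as well. Below is a minimal sketch in Python; the URL is a placeholder and should be replaced with the value copied via Get Function URL, which already contains the ?code=<function key> query string:

# Minimal sketch: trigger the deployed HTTP-triggered Azure Function without Postman.
# Replace the placeholder URL with the one copied from "Get Function URL".
import requests

function_url = "https://<your-function-app>.azurewebsites.net/api/Function1?code=<function-key>"

response = requests.post(function_url)
print(response.status_code)
print(response.text)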


How to GET records from Salesforce using Logic App

Learn how to fetch Salesforce records in an Azure Logic App in two different ways.


How to GET records from Salesforce using Logic App

Learn how you can easily use a SOQL query to fetch Salesforce records in a Logic App.

